G1.1
Mathematical methods for the analysis of potential field data and geodetic time series
Co-organized by EMRP2
Convener: Volker Michel | Co-conveners: Christian Gerhards, Anna Klos, Wieslaw Kosek, Michael Schmidt

Presentations: Tue, 24 May, 10:20–11:37 (CEST) | Room -2.16

Chairperson: Anna Klos
10:20–10:27 | EGU22-400 | ECS | Virtual presentation
Hamideh Taherinia and Shahrokh Pourbeyranvand

Earthquakes are among the most devastating natural disasters, and their impact on human society, in terms of casualties and economic damage, has been significant throughout history. Earthquake prediction can aid in preparing for such major events; its purpose is to identify earthquake-prone areas and reduce financial and human losses. Any parameter that changes before an earthquake in such a way that careful study of its variations allows the earthquake to be anticipated is called a precursor. Recently, more attention has been paid to geophysical, geomagnetic, geoelectrical, and electromagnetic precursors. In the present study, geomagnetic data from three stations, obtained through INTERMAGNET and located less than 500 km from the 5 Sep. Japan earthquake, are investigated. The method of characteristic curves is then used to remove the effect of the diurnal variation of the geomagnetic field. The anomalies, which are more distinct after application of the method, are then matched with the seismic activity of the region. By separating the noise from the desired signal, a pure anomaly can be observed. Among the various magnetic components, the horizontal components are more suitable than the vertical for the proposed process, because the geomagnetic gradient causes larger variations of the field in the vertical direction. One year of magnetic data from the three stations, for the X, Y, and Z components, together with seismic data for Japan, are used to implement the method.

The method is based on plotting a given magnetic field component over successive time intervals within the same 24-hour frame, which reveals the diurnal character of that component at each station. Averaging the values at every point on the horizontal axis of the plot, whose unit of time depends on the sampling (hourly mean, minute mean, etc.), yields a curve called the characteristic curve. Subtracting the characteristic curve from the geomagnetic data then reveals anomalies free of diurnal-variation noise, so that possible earthquake-related anomalies stand out more distinctly. After removing the daily variation from each component, the anomalies related to earthquakes can be observed; taking the standard deviation of each component into account, the pre-seismic anomalies are much more clearly distinguished than in the original data, making them suitable for study as a seismic precursor. Further investigation, however, revealed the presence of a magnetic storm during the period under study, which casts doubt on the feasibility of using these geomagnetic data as a precursor. Several other pieces of evidence nevertheless support the existence of precursory geomagnetic phenomena before earthquakes. Based on the current data and results, it is therefore not possible to draw conclusions about the applicability of precursory geomagnetic studies, and further data and studies are required.
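As a schematic illustration of the characteristic-curve procedure described above (a simplified sketch on synthetic hourly means; the function name, array shapes, and values are illustrative, not from the authors' code):

```python
import numpy as np

def remove_diurnal(series, samples_per_day=24):
    """Remove diurnal variation from one geomagnetic component.

    series: 1-D array of hourly means spanning whole days.
    Returns the residual series and the characteristic curve.
    """
    days = series.reshape(-1, samples_per_day)     # one row per day
    characteristic = days.mean(axis=0)             # average each hour over all days
    residual = (days - characteristic).ravel()     # subtract the curve day by day
    return residual, characteristic

# Synthetic example: a pure 24-hour cycle plus a one-off anomaly on day 5
hours = np.arange(10 * 24)
field = 30.0 * np.sin(2 * np.pi * hours / 24)
field[5 * 24 + 12] += 8.0                          # anomalous spike
residual, curve = remove_diurnal(field)
# Once the diurnal cycle is removed, the spike dominates the residual
print(residual.argmax() == 5 * 24 + 12)
```

The same subtraction applies per station and per component (X, Y, Z); only the reshaping changes with the sampling rate.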

How to cite: Taherinia, H. and Pourbeyranvand, S.: Investigation of earthquake precursors using magnetometric stations in Japan, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-400, https://doi.org/10.5194/egusphere-egu22-400, 2022.

10:27–10:34 | EGU22-1545 | ECS | On-site presentation
Magued Al-Aghbary, Mohamed Sobh, and Christian Gerhards

Reliable and direct geothermal heat flow (GHF) measurements in Africa are sparse. It is a challenging task to create a map that reflects the GHF and covers the African continent in its entirety.

We approached this task by training a random forest regression algorithm. After carefully tuning the algorithm's hyperparameters, the trained model relates the GHF to various geophysical and geological covariates that are considered to be statistically significant for the GHF. The covariates are mainly global datasets and models, such as Moho depth, Curie depth, and gravity anomalies. To improve the predictions, we included some regional datasets. The quality and reliability of the datasets are assessed before the algorithm is trained.
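The general workflow (tuned random forest regression of heat flow onto geophysical covariates) might be sketched as follows; the covariate columns and target values here are synthetic placeholders standing in for the real gridded datasets, not the authors' data or code:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)

# Placeholder covariates standing in for gridded Moho depth, Curie depth,
# and gravity anomalies at sites with known heat-flow measurements.
n = 300
X = rng.normal(size=(n, 3))                      # columns: moho, curie, gravity
ghf = 60 + 8 * X[:, 0] - 5 * X[:, 1] + rng.normal(scale=2, size=n)

# Light hyperparameter tuning by cross-validated grid search
search = GridSearchCV(
    RandomForestRegressor(random_state=0),
    {"n_estimators": [100, 200], "max_depth": [None, 8]},
    cv=3,
)
search.fit(X, ghf)
predicted = search.predict(X)
print(search.best_params_)
```

In practice the tuned model would be evaluated on a held-out region (here, Australia) before predicting onto a continent-wide covariate grid.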

The model's performance is validated against Australia, which has a large database of GHF measurements. The predicted GHF map of Africa shows acceptable performance indicators and is consistent with existing recognized GHF maps of Africa.

How to cite: Al-Aghbary, M., Sobh, M., and Gerhards, C.: A first attempt at a continental scale geothermal heat flow model for Africa, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1545, https://doi.org/10.5194/egusphere-egu22-1545, 2022.

10:34–10:41 | EGU22-2447 | ECS | On-site presentation
Viviana Wöhnke, Annette Eicker, Laura Jensen, and Matthias Weigelt

Water mass changes at and below the surface of the Earth cause changes in the Earth’s gravity field which can be observed by at least three geodetic observation techniques: ground-based point measurements using terrestrial gravimeters, space-borne gravimetric satellite missions (GRACE and GRACE-FO) and geometrical deformations of the Earth’s crust observed by GNSS. Combining these techniques offers the opportunity to compute the most accurate (regional) water mass change time series with the highest possible spatial and temporal resolution, which is the goal of a joint project within the interdisciplinary DFG Collaborative Research Centre (SFB 1464) "TerraQ – Relativistic and Quantum-based Geodesy".

A method well suited for data combination of time-variable quantities is the Kalman filter algorithm, which sequentially updates water storage changes by combining a prediction step with observations from the next time step. As opposed to the standard way of describing gravity field variations by global spherical harmonics, we will introduce space-localizing radial basis functions as a more suitable parameterization of high-resolution regional water storage change. A closed-loop simulation environment has been set up to allow the testing of the setup and the tuning of the algorithm. In a first step only simulated GRACE data together with realistic correlated observation errors will be used in the Kalman filter to sequentially update the parameters of a regional gravity field model. However, the implementation was designed to flexibly include further observation techniques (GNSS, terrestrial gravimetry) at a later stage. This presentation will outline the Kalman filter framework, introduce the regional parameterization approach, and address challenges related to, e.g., ill-conditioned matrices and the proper choice of the radial basis function parameterization.
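The sequential predict/update cycle at the heart of such a framework can be sketched generically; the state vector here stands in for a small set of basis-function coefficients, and all matrices and dimensions are toy assumptions, not the project's actual parameterization:

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One Kalman filter cycle: propagate the state, then assimilate z."""
    # Prediction
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    S = H @ P_pred @ H.T + R                     # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy setup: two basis-function coefficients observed through a 3x2 design matrix
rng = np.random.default_rng(1)
truth = np.array([1.0, -0.5])
H = rng.normal(size=(3, 2))
x, P = np.zeros(2), np.eye(2) * 10.0
F, Q, R = np.eye(2), np.eye(2) * 1e-4, np.eye(3) * 0.01
for _ in range(50):
    z = H @ truth + rng.normal(scale=0.1, size=3)
    x, P = kalman_step(x, P, z, F, Q, H, R)
print(np.round(x, 2))
```

In the real setting, H would map radial-basis-function coefficients to (simulated) GRACE observations with correlated errors, and further observation types would simply stack additional rows.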

How to cite: Wöhnke, V., Eicker, A., Jensen, L., and Weigelt, M.: Regional modeling of water storage variations in a Kalman filter framework, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2447, https://doi.org/10.5194/egusphere-egu22-2447, 2022.

10:41–10:48 | EGU22-2963 | ECS | Virtual presentation
Naomi Schneider and Volker Michel

The approximation of the gravitational potential is still of interest in geodesy, as it is utilized, e.g., for studying the mass transport of the Earth. The Inverse Problem Matching Pursuits (IPMPs) were proposed as alternative solvers for this kind of problem. They were successfully tested on diverse applications, including the downward continuation of the gravitational potential.

It is well known that, for such linear inverse problems on the sphere, there exists a variety of global as well as local basis systems, e.g. spherical harmonics and Slepian functions as well as radial basis functions and wavelets. Each type has its specific pros and cons. Nonetheless, approximations are often represented in only one of them. In contrast, the IPMPs enable an approximation as a mixture of diverse trial functions. These are chosen iteratively from an intentionally overcomplete dictionary such that the Tikhonov functional is reduced. However, an a priori defined, finite dictionary has its own drawbacks, in particular with respect to efficiency.

Thus, we developed a learning add-on which uses an infinite dictionary instead while simultaneously reducing the computational cost. The add-on is implemented as constrained non-linear optimization problems with respect to the characteristic parameters of the different basis systems. In this talk, we give details on the matching pursuits and, in particular, the learning add-on and show recent numerical results with respect to the downward continuation of the gravitational potential.
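The greedy selection idea can be illustrated with plain matching pursuit over a mixed, overcomplete dictionary; this toy sketch omits the Tikhonov regularization and the continuous parameter optimization of the learning add-on, and all dictionary choices are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 200)

# Overcomplete dictionary mixing two trial-function families:
# global sinusoids and localized Gaussian bumps (columns normalized).
sinusoids = [np.sin(2 * np.pi * k * t) for k in range(1, 11)]
bumps = [np.exp(-((t - c) ** 2) / 0.002) for c in np.linspace(0.05, 0.95, 19)]
D = np.array(sinusoids + bumps).T
D /= np.linalg.norm(D, axis=0)

signal = 2.0 * D[:, 2] - 1.5 * D[:, 15]          # atoms from both families

# Plain matching pursuit: greedily pick the atom best correlated with
# the current residual and peel off its contribution.
residual = signal.copy()
approx = np.zeros_like(signal)
for _ in range(5):
    corr = D.T @ residual
    k = np.argmax(np.abs(corr))
    approx += corr[k] * D[:, k]
    residual -= corr[k] * D[:, k]
print(np.linalg.norm(residual))
```

The mixture of global and local atoms in the final expansion is exactly what a single-basis representation cannot provide.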

How to cite: Schneider, N. and Michel, V.: Experimenting with automatized numerical methods, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2963, https://doi.org/10.5194/egusphere-egu22-2963, 2022.

10:48–10:55 | EGU22-10879 | On-site presentation
Frederik J Simons, Arthur P. Guillaumin, Adam M. Sykulski, and Sofia C. Olhede

We establish a theoretical framework, an algorithmic basis, and a computational workflow for the statistical analysis of multi-variate multi-dimensional random fields - sampled (possibly irregularly, with missing data) and finite (possibly bounded irregularly). Our research is practically motivated by geodetic and scientific problems of topography and gravity analysis in geophysics and planetary physics, but our solutions fulfill the more general need for sophisticated methods of inference that can be applied to massive remote-sensing data sets, and as such, our mathematical, statistical, and computational solutions transcend any particular application. The generic problem that we are addressing is: two (or more) spatial fields are observed, e.g., by passive or active sensing, and we desire a parsimonious statistical description of them, individually and in their relation to one another. We consider the fields to be realizations of a random process, parameterized as a Matérn covariance structure, a very flexible description that includes, as special cases, many of the known models in popular use (e.g. exponential, autoregressive, von Kármán, Gaussian, Whittle, ...). Our fundamental question is how to find estimates of the parameters of a Matérn process, and the distribution of those estimates for uncertainty quantification. Our answer is, fundamentally: via maximum-likelihood estimation. We now provide a computationally and statistically efficient method for estimating the parameters of a stochastic covariance model observed on a regular spatial grid in any number of dimensions. Our proposed method, which we call the Debiased Spatial Whittle likelihood, makes important corrections to the well-known Whittle likelihood to account for large sources of bias caused by boundary effects and aliasing.
We generalise the approach to flexibly allow for significant volumes of missing data including those with lower-dimensional substructure, and for irregular sampling boundaries. We build a theoretical framework under relatively weak assumptions which ensures consistency and asymptotic normality in numerous practical settings including missing data and non-Gaussian processes. We also extend our consistency results to multivariate processes. We provide detailed implementation guidelines which ensure the estimation procedure can still be conducted in O(n log n) operations, where n is the number of points of the encapsulating rectangular grid, thus keeping the computational scalability of Fourier and Whittle-based methods for large data sets. We validate our procedure over a range of simulated and real world settings, and compare with state-of-the-art alternatives, demonstrating the enduring practical appeal of Fourier-based methods, provided they are corrected and augmented by the procedures that we developed.
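To give a flavor of periodogram-based inference, here is a minimal one-dimensional ordinary Whittle fit of an AR(1) parameter; this is the classical (biased) Whittle likelihood on a toy process, not the authors' debiased spatial variant, and all values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate an AR(1) series (a simple stand-in for a correlated random field)
n, phi_true = 2048, 0.6
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi_true * x[t - 1] + rng.normal()

# Periodogram at the positive Fourier frequencies
freqs = 2 * np.pi * np.arange(1, n // 2) / n
I = np.abs(np.fft.fft(x)[1:n // 2]) ** 2 / (2 * np.pi * n)

def whittle_nll(phi):
    """Concentrated Whittle negative log-likelihood for AR(1)."""
    g = 1.0 / (2 * np.pi * np.abs(1 - phi * np.exp(-1j * freqs)) ** 2)
    s2 = np.mean(I / g)                          # profiled innovation variance
    return np.sum(np.log(s2 * g) + I / (s2 * g))

phis = np.linspace(-0.95, 0.95, 381)
phi_hat = phis[np.argmin([whittle_nll(p) for p in phis])]
print(round(phi_hat, 2))
```

The O(n log n) cost comes from the single FFT; the debiasing of the abstract replaces the model spectrum with its expected finite-sample, aliased counterpart.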

How to cite: Simons, F. J., Guillaumin, A. P., Sykulski, A. M., and Olhede, S. C.: Efficient Parameter Estimation of Sampled Random Fields Using the Debiased Spatial Whittle Likelihood, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10879, https://doi.org/10.5194/egusphere-egu22-10879, 2022.

10:55–11:02 | EGU22-3240 | ECS | Virtual presentation
Hilary Martens, Mark Simons, Luis Rivera, Martin van Driel, and Christian Boehm

The solid Earth’s deformation response to surface loading by ocean tides depends on the material properties of Earth’s interior. Comparisons of observed and predicted oceanic load tides can therefore shed new light on the structure of the crust and mantle. Recent advances in satellite geodesy, including altimetry and Global Navigation Satellite Systems (GNSS), have improved the accuracy and spatial resolution of ocean-tide models as well as the ability to measure precisely three-dimensional surface displacements caused by ocean tidal loading. Here, we investigate oceanic load tides in the western United States using measurements of surface displacement made by a dense array of GNSS stations in the Network of the Americas (NOTA). Dominant tidal harmonics from three frequency bands are considered (M2, O1, Mf). We compare the empirical load-tide estimates with predictions of surface displacements made by the LoadDef software package (Martens et al., 2019), with the goal of refining models for Earth’s (an)elastic and density structure through the crust and upper mantle of the western US.

How to cite: Martens, H., Simons, M., Rivera, L., van Driel, M., and Boehm, C.: Oceanic load tides in the western United States, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3240, https://doi.org/10.5194/egusphere-egu22-3240, 2022.

11:02–11:09 | EGU22-1590 | ECS | On-site presentation
Nihal Tekin Ünlütürk and Uğur Doğan

In this study, the effects of seasonal variation on the vertical position accuracy of GPS, calculated by time series analysis of continuous GPS stations, were investigated. Weather changes and water vapor in the atmosphere affect the position accuracy of GPS and cause fluctuations in GPS height values, and the height component is known to be the most sensitive to these atmospheric changes. Since the effects on the height component are easier to interpret in eastern Turkey, where topography and seasonal changes are more pronounced than in the rest of the country, four continuous GPS stations in the East of Turkey from the Turkish National Permanent GNSS Network (TUSAGA-Aktif), covering the 2014-2019 date range, were chosen. The daily coordinates of the stations were obtained with the GAMIT/GLOBK software. By applying time series analysis to the daily coordinate values of the stations, statistically significant trend, periodic, and stochastic components were determined. As a result of the analysis, the vertical annual velocities of the stations and the standard deviations of these velocities were determined.

For the stations, the velocity and standard deviation values of the height component were calculated for each month, season, and year with respect to ellipsoidal height. As the ellipsoidal height increases, the velocity and its standard deviation decrease. The minimum velocity values are observed in winter for the station with the lowest ellipsoidal height and in autumn for the station with the highest ellipsoidal height, while the minimum standard deviation values occur in winter for the station with the lowest ellipsoidal height and in summer for the station with the highest. According to these results, the coordinate displacements caused by seasonal variation may be significant, and their effects should be considered, especially in high-precision geodetic surveys.

In addition, the velocity values of the stations were calculated for different years, and a decrease was observed in the height component depending on the observation duration: as the observation duration increases, both the velocity values and their standard deviations decrease. To avoid velocity estimation errors entirely, the data length should exceed 4.5 years.
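The underlying estimation step (fitting a trend plus annual and semi-annual harmonics to daily height values by least squares) can be sketched on synthetic data; the amplitudes, noise level, and span here are illustrative, not the study's values:

```python
import numpy as np

rng = np.random.default_rng(4)

# Six years of simulated daily heights: linear uplift plus an annual cycle
t = np.arange(6 * 365) / 365.25                  # time in years
height = 2.0 * t + 3.0 * np.sin(2 * np.pi * t) + rng.normal(scale=2.0, size=t.size)

# Functional model: offset, trend, annual and semi-annual harmonics
A = np.column_stack([
    np.ones_like(t), t,
    np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
    np.sin(4 * np.pi * t), np.cos(4 * np.pi * t),
])
coef, *_ = np.linalg.lstsq(A, height, rcond=None)
velocity = coef[1]                               # vertical rate, units of height/yr
print(round(velocity, 2))
```

Shortening the span or omitting the harmonic terms inflates both the estimated rate and its formal uncertainty, which is the effect quantified in the abstract.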

Keywords: GPS height component, GPS time series, Seasonal effect, Velocity estimation

How to cite: Tekin Ünlütürk, N. and Doğan, U.: The Effects of Seasonal Variation on GPS Height Component, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1590, https://doi.org/10.5194/egusphere-egu22-1590, 2022.

11:09–11:16 | EGU22-3605 | ECS | On-site presentation
Kevin Gobron, Paul Rebischung, Olivier de Viron, Alain Demoulin, and Michel Van Camp

Understanding and modelling the properties of the stochastic variability -- often referred to as noise -- in geodetic time series is crucial to obtain realistic uncertainties for deterministic parameters, e.g., long-term velocities, and helpful in characterizing non-modelled processes. With the ever-increasing span of geodetic time series, it is expected that additional observations will help to better understand the low-frequency properties of the stochastic variability. In the meantime, recent studies evidenced that the choice of the functional model for the time series may bias the assessment of these low-frequency stochastic properties. In particular, the presence of frequent offsets, or step discontinuities, in position time series tends to systematically flatten the periodogram of position residuals at low frequencies and prevents the detection of possible random-walk-type variability.

 

In this study, we investigate the ability of frequently used statistical tools, namely the Lomb-Scargle periodogram and the Maximum Likelihood Estimation (MLE) method, to correctly retrieve low-frequency stochastic properties of geodetic time series in the presence of frequent offsets. By evaluating the biases of each method for several functional models, we demonstrate that neither of these tools is reliable for low-frequency investigation. By assessing alternative approaches, we show that using Least-Squares Harmonic Estimation and Restricted Maximum Likelihood Estimation (RMLE) solves part of the problems reported by previous works. However, we show that, even when using these optimal methods, the presence of frequent offsets inevitably blurs the estimated low-frequency properties of geodetic time series by increasing low-frequency stochastic parameter uncertainties more than those of other stochastic parameters.
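The basic setup under study can be sketched as follows: a position series with step discontinuities, a functional model containing Heaviside terms at the known offset epochs, and a Lomb-Scargle periodogram of the residuals. All series lengths, noise levels, and epochs are illustrative:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(5)

# Daily position series: trend, random-walk noise, and two step discontinuities
n = 3000
t = np.arange(n) / 365.25
y = 1.5 * t + np.cumsum(rng.normal(scale=0.05, size=n))
steps = [800, 2000]
for s in steps:
    y[s:] += 5.0

# Functional model with a Heaviside term for each known offset epoch
A = np.column_stack([np.ones(n), t]
                    + [(np.arange(n) >= s).astype(float) for s in steps])
residuals = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]

# Lomb-Scargle periodogram of the residuals (angular frequencies in rad/yr)
freqs = 2 * np.pi * np.linspace(0.2, 10, 200)
power = lombscargle(t, residuals, freqs)
print(power[:3])
```

Estimating each step coefficient absorbs part of the random-walk signal, which is the low-frequency flattening effect the study quantifies.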


How to cite: Gobron, K., Rebischung, P., de Viron, O., Demoulin, A., and Van Camp, M.: Impact of Offsets on Assessing the Low-Frequency Stochastic Properties of Geodetic Time Series, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3605, https://doi.org/10.5194/egusphere-egu22-3605, 2022.

11:16–11:23 | EGU22-3766 | Virtual presentation
Gael Kermarrec, Davide Cucci, Jean-Philippe Montillet, and Stephane Guerrier

The modelling of the stochastic noise properties of GNSS daily coordinate time series makes it possible to associate realistic uncertainties with the estimated geophysical parameters (e.g. tectonic rate, seasonal signal). Up to now, geodetic software based on Maximum Likelihood Estimation (MLE) has jointly inverted a functional model (i.e. the geophysical parameters) and a stochastic noise model. This method suffers from a computational time that increases steeply with the length of the GNSS time series, which becomes an issue considering that the first permanent stations were installed in the late 1980s and early 1990s and have recorded more than 25 years of geodetic data. Combined with the tremendous number of permanent stations blanketing the world (more than 20,000), the processing time in the analysis of large GNSS networks is a key parameter.

Here, we propose an alternative to MLE called the Generalized Method of Wavelet Moments (GMWM). This method is based on the wavelet variance, i.e. a decomposition of the time series using the Haar wavelet. We show first results and compare them with MLE in terms of computational efficiency and absolute error on the estimated parameters. The strength of this new method lies in its flexibility in choosing various stochastic noise models (e.g., Matérn, power law, flicker, white noise, random walk) and its robustness against outliers. Additional developments to account for deterministic components such as seasonal signals, offsets, or post-seismic relaxation are straightforward. We explain the principle behind the method and apply it to both simulated and real GNSS coordinate time series. Our first results are compared with estimates from the Hector software.
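The empirical quantity the GMWM matches against its model-implied counterpart can be sketched as a scale-based variance; the estimator below is the non-overlapping Allan-variance form, which is proportional to the Haar wavelet variance, and the GMWM fitting step itself is omitted. All names and values are illustrative:

```python
import numpy as np

def wavelet_variance_haar(x, max_level=8):
    """Scale-based variance at dyadic scales tau_j = 2^j.

    Averages the series over adjacent non-overlapping windows of length
    tau_j and takes half the mean squared difference of neighboring
    window means (an Allan-variance estimate, proportional to the Haar
    wavelet variance used by the GMWM).
    """
    variances = []
    for j in range(1, max_level + 1):
        tau = 2 ** j
        m = len(x) // tau * tau
        means = x[:m].reshape(-1, tau).mean(axis=1)
        d = means[1:] - means[:-1]
        variances.append(0.5 * np.mean(d ** 2))
    return np.array(variances)

rng = np.random.default_rng(6)
wv = wavelet_variance_haar(rng.normal(size=2 ** 14))
print(wv)
```

For white noise this variance decays with scale; stochastic models such as flicker noise or random walk leave distinct signatures across scales, which is what the moment matching exploits.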

How to cite: Kermarrec, G., Cucci, D., Montillet, J., and Guerrier, S.: Application of the Generalized Method of Wavelet Moments to the analysis of daily position GNSS time series, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3766, https://doi.org/10.5194/egusphere-egu22-3766, 2022.

11:23–11:30 | EGU22-6864 | Virtual presentation
Yangfei Hou, Junping Chen, and Yize Zhang

Considering that the precise orbit and clock products provided by the International GNSS Service (IGS) fall short of the 1 mm accuracy required by the Global Geodetic Observing System (GGOS), the Lomb-Scargle periodogram was used to analyze the systematic and periodic deviations between the precise products of individual GNSS analysis centers (ACs) and the IGS final products. On this basis, a deviation correction model based on the least squares method was established to correct the precise products. The correction results show that the standard deviation of the precise clocks decreases by 15.4% on average, that of the radial orbit decreases by 33.3% on average, and that of the combined effect of radial orbit and clock decreases by 24.0% on average. The signal-in-space user ranging error (SISURE) also decreases significantly, from the centimeter to the millimeter level. Positioning verification at 15 stations shows that the consistency between positioning results obtained with single-AC products and those obtained with the IGS final products is also improved after the deviation correction, with an average improvement of 14.3% over the three ACs. This demonstrates that the deviation correction model can effectively improve the consistency between AC products and the IGS final products.
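The correction idea (identify a periodic deviation between an AC product and the combined product, then remove it by a least-squares model) can be sketched on a synthetic clock-deviation series; the bias, period, and noise level are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated clock deviation of one AC's product w.r.t. the combined product:
# a constant bias plus a periodic term and white noise
n = 1440                                          # e.g. 30-s epochs over 12 h
t = np.arange(n) / n
dev = 0.3 + 0.2 * np.sin(2 * np.pi * 4 * t + 0.5) + rng.normal(scale=0.05, size=n)

# Least-squares correction model: bias plus sine/cosine at the detected period
A = np.column_stack([np.ones(n),
                     np.sin(2 * np.pi * 4 * t),
                     np.cos(2 * np.pi * 4 * t)])
corrected = dev - A @ np.linalg.lstsq(A, dev, rcond=None)[0]

improvement = 1 - corrected.std() / dev.std()
print(f"std reduced by {improvement:.0%}")
```

In the study, the dominant periods are first identified from the Lomb-Scargle periodogram and the corresponding harmonic columns are then included in the correction model.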

How to cite: Hou, Y., Chen, J., and Zhang, Y.: Characteristics analysis and correction of GPS precise products in analysis centers based on Lomb-Scargle periodogram, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6864, https://doi.org/10.5194/egusphere-egu22-6864, 2022.

11:30–11:37 | EGU22-8369 | Virtual presentation
D. Ugur Sanli, Ece Uysal, Deniz Oz Demir, and Huseyin Duman

The determination of GPS velocity accuracy and velocity uncertainty has been a topic of interest to researchers in recent years. Velocity and velocity uncertainty from continuous GPS data have been studied in depth, but velocity and velocity uncertainty from campaign measurements are still the subject of ongoing research. Recent studies have shown that the positioning accuracy of GPS PPP is latitude dependent, and the velocity and velocity uncertainty produced by PPP should be treated in the same way. In this sense, a global assessment is necessary. NASA JPL offers researchers a rich global database of GNSS time series analysis results across the globe. In this study, an experiment is conducted to determine the velocity quality of GPS campaign measurements from around 30 globally distributed stations of the IGS network. This time, our motivation is to determine the accuracy and uncertainty of GPS campaign rates from at least 4 years of data, with campaigns performed annually on the same date. As in our previous study, we decimated coordinate components from the NASA JPL time series to generate GPS campaigns; that is, we use 24-hour data for annual campaign measurements and repeat campaigns on three consecutive days each year. The deformation rates from NASA JPL were taken as ground truth, and the accuracy of the deformation rates produced by our experiments was evaluated against them. Preliminary findings suggest that velocity deviations from the truth may be as large as 4 mm/year horizontally and 10 mm/year vertically. In the presentation, we discuss the causes of this bias and the global distribution of velocity accuracy and velocity uncertainty.
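The decimation experiment can be sketched on a synthetic continuous series: keep three consecutive daily solutions on the same date each year and fit a straight line through the resulting "campaigns". The trend, annual amplitude, and noise level are illustrative, not the study's values:

```python
import numpy as np

rng = np.random.default_rng(8)

# Ten years of daily vertical positions with trend, annual signal, and noise
t_daily = np.arange(10 * 365) / 365.25
up = (3.0 * t_daily + 4.0 * np.sin(2 * np.pi * t_daily)
      + rng.normal(scale=3.0, size=t_daily.size))

# Decimate to annual campaigns: three consecutive daily solutions on the
# same date each year, as in the experiment described above
idx = np.concatenate([np.arange(y * 365, y * 365 + 3) for y in range(10)])
t_c, up_c = t_daily[idx], up[idx]

# Campaign velocity from a straight-line fit, to be compared with the
# rate estimated from the full continuous series (the "ground truth")
slope = np.polyfit(t_c, up_c, 1)[0]
print(round(slope, 2))
```

Because all campaigns sample nearly the same phase of the annual cycle, the seasonal signal aliases mainly into the intercept; with shorter spans or unlucky dates it biases the rate itself, which is the effect the study measures.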

How to cite: Sanli, D. U., Uysal, E., Oz Demir, D., and Duman, H.: Accuracy of velocities from annually repeated GPS campaigns, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8369, https://doi.org/10.5194/egusphere-egu22-8369, 2022.