G1.3 | Data science and machine learning in geodesy
Convener: Benedikt Soja | Co-conveners: Maria Kaselimi, Milad Asgarimehr, Sadegh Modiri, Alex Sun
Orals
| Tue, 16 Apr, 14:00–15:45 (CEST)
 
Room -2.91
Posters on site
| Attendance Mon, 15 Apr, 10:45–12:30 (CEST) | Display Mon, 15 Apr, 08:30–12:30
 
Hall X2
This session aims to showcase novel applications of data science and machine learning methods in geodesy.

In recent years, the exponential growth of geodetic data from various observation techniques has created challenges and opportunities. Innovative approaches are required to efficiently handle and harness the vast amount of geodetic data available nowadays for scientific purposes, for example when dealing with “big data” from Global Navigation Satellite System (GNSS) and Interferometric Synthetic Aperture Radar (InSAR). Likewise, numerical weather models and other environmental models important for geodesy come with ever-growing resolutions and dimensions. Strategies and methodologies from the fields of data science and machine learning have shown great potential not only in this context but also when applied to more limited data sets to solve complex non-linear problems in geodesy.

We invite contributions related to various aspects of applying methods from data science and machine learning (including both shallow and deep learning techniques) to geodetic problems and data sets. We welcome investigations related to (but not limited to): more efficient and automated processing of geodetic data, pattern and anomaly detection in geodetic time series, images or higher-dimensional data sets, improved predictions of geodetic parameters, such as Earth orientation or atmospheric parameters into the future, combination and extraction of information from multiple inhomogeneous data sets (multi-temporal, multi-sensor, multi-modal fusion), feature selection, super-sampling of geodetic data, and improvements of large-scale simulations. We strongly encourage contributions that address crucial aspects of uncertainty quantification, interpretability, and explainability of machine learning outcomes, as well as the integration of physical models into data-driven frameworks.

By combining the power of artificial intelligence with geodetic science, we aim to open new horizons in our understanding of Earth's dynamic geophysical processes. Join us in this session to explore how the fusion of physics and machine learning promises advantages in generalization, consistency, and extrapolation, ultimately advancing the frontiers of geodesy.

Orals: Tue, 16 Apr | Room -2.91

Chairperson: Benedikt Soja
14:00–14:05
14:05–14:15
|
EGU24-11029
|
ECS
|
On-site presentation
|
Quantum Machine Learning for Deformation Detection: Application for InSAR Point Clouds
Nhung Le, Benjamin Männel, Mahdi Motagh, Andreas Brack, and Harald Schuh

Abstract:

Machine learning (ML) is emerging as a powerful tool for data analysis. Anomaly detection based on classical approaches can be limited in processing speed, especially on massive datasets. Meanwhile, quantum algorithms have been shown to have potential for optimization, scenario simulation, and artificial intelligence. Thus, this study combines quantum algorithms and ML to improve the binary classification performance of ML models for better sensitivity in surface deformation detection. We experimented with combined GNSS-InSAR data to identify significant deformation regions in Northern Germany. We classify the movement characteristics based on four main features: vertical movement velocities, root mean square errors, standard deviations, and outliers in the GNSS-InSAR time series. Our primary results reveal that the classification accuracy based on Quantum Machine Learning (QML) clearly exceeds that of the pure ML technique. Specifically, on the same sample dataset, the classification performance of the neural network based on pure ML is only around 50 to 70%, while that of the QML technique can reach ~90%. The significant deformation regions are concentrated in the river basins of the Elbe, Weser, Ems, and Rhine, where the average surface subsidence rate varies around -4.5 mm/yr. Also, we suggest dividing the surface movement features in Northern Germany into five classes to reduce the effects of varying data quality and algorithm uncertainty. Our findings support the development of quantum computing applications and promote the potential of QML for deformation analyses.
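The abstract gives no code; as a rough illustration of the classical side of such a setup, the sketch below trains an ordinary (non-quantum) scikit-learn classifier on synthetic stand-ins for the four features named above. All numbers, distributions, and class balances are invented for illustration, and the quantum circuit itself is out of scope here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000
# Synthetic stand-ins for the four features named in the abstract:
# vertical velocity (mm/yr), RMSE, standard deviation, outlier fraction.
X_stable = rng.normal([0.0, 2.0, 1.0, 0.02], [1.0, 0.5, 0.3, 0.01], (n, 4))
X_deform = rng.normal([-4.5, 3.0, 1.5, 0.05], [1.5, 0.7, 0.4, 0.02], (n, 4))
X = np.vstack([X_stable, X_deform])
y = np.r_[np.zeros(n), np.ones(n)]  # 1 = significant deformation

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

On this deliberately separable toy data a linear baseline already performs well; the abstract's point is that on the real, harder dataset the QML variant outperformed a pure-ML neural network.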

Keywords:

Quantum Machine Learning, Binary Classification, GNSS-InSAR Data, Deformation Detection.

How to cite: Le, N., Männel, B., Motagh, M., Brack, A., and Schuh, H.: Quantum Machine Learning for Deformation Detection: Application for InSAR Point Clouds, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-11029, https://doi.org/10.5194/egusphere-egu24-11029, 2024.

14:15–14:25
|
EGU24-10467
|
On-site presentation
Improving ground deformation prediction in satellite InSAR using ICA-assisted RNN model
Mimi Peng, Mahdi Motagh, Zhong Lu, Zhuge Xia, Zelong Guo, Chaoying Zhao, and Yinghui Quan

Geological hazards caused by both natural forces and human-induced disturbances, such as land subsidence, earthquakes, tectonic motion, mining activities, coastal erosion, volcanic activities, and permafrost alterations, cause great adverse effects on the Earth's surface. The preservation of a comprehensive record detailing past, present, and future surface movements is imperative for effective disaster risk mitigation and property protection. Interferometric Synthetic Aperture Radar (InSAR) is widely recognized as a highly effective and extensively employed geodetic technique for comprehending the spatiotemporal evolution of historical ground surface deformation. However, it only reveals the past deformation evolution, and deformation updates are slow given the long revisit cycle of satellites. The future evolution of deformation is also crucial for preventing and mitigating geological hazards. Unlike traditional mathematical-statistical models and physical models, machine learning methods provide a new perspective and the possibility to efficiently and automatically mine the time series information over a large-scale area. In the context of InSAR time series prediction over large areas, previous research does not consider the spatiotemporal heterogeneity caused by various factors over a large-scale area and mainly focuses on one typical deformation point.

Therefore, in this study, we present a framework designed to predict large-scale spatiotemporal InSAR time series by integrating independent component analysis (ICA) and a Long Short-Term Memory (LSTM) machine learning model. This framework is developed with a specific focus on addressing spatiotemporal heterogeneity within the dataset. The utilization of the ICA method is employed to identify and capture the displacement signals of interest within the InSAR data, enabling the characterization of independent time series signals associated with various natural or anthropogenic processes. Additionally, a K-means clustering approach is incorporated to partition the study area into spatiotemporal homogeneity subregions across a large-scale region, aiming to mitigate potential decreases in model accuracy caused by data heterogeneity. Subsequently, LSTM models are constructed for each cluster, and optimal parameters are determined. The proposed framework is rigorously tested using simulated datasets and validated against two real-world cases—land subsidence in the Willcox Basin and post-seismic deformation following the Sarpol-e Zahab earthquake. Comparative analysis demonstrates that the proposed model surpasses the original LSTM, resulting in a 34% and 17% improvement in average prediction accuracy, respectively. The spatial prediction results in 60 days over the two cases are mapped with high accuracy.
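The non-LSTM parts of such a framework can be sketched with scikit-learn: ICA recovers independent temporal sources from pixel time series, and K-means on the per-pixel mixing weights forms homogeneous subregions. Everything here (data, dimensions, cluster count) is a synthetic toy, not the paper's actual configuration; in the proposed framework each cluster would then receive its own LSTM.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
t = np.linspace(0, 4, 200)                 # 4 years, ~weekly sampling
trend = -6.0 * t                           # subsidence-like trend (mm)
seasonal = 3.0 * np.sin(2 * np.pi * t)     # annual signal (mm)

# 300 synthetic pixels: each mixes the two sources with different weights
n_pix = 300
w = rng.uniform(0, 1, (n_pix, 2))
X = w @ np.vstack([trend, seasonal]) + 0.3 * rng.normal(size=(n_pix, len(t)))

# ICA recovers the independent temporal sources (up to sign/scale/order)
ica = FastICA(n_components=2, random_state=0)
sources = ica.fit_transform(X.T).T         # shape (2, n_times)

# K-means on the mixing weights groups pixels into homogeneous subregions,
# each of which would get its own LSTM in the paper's framework
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(ica.mixing_)
```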

This study introduces an integrated framework that seamlessly integrates InSAR data processing with machine learning techniques such as LSTM to enhance our ability to predict deformation over large-scale geographical areas. The adaptability of the proposed model has made it an alternative to numerical or empirical models, especially when detailed on-site data is scarce or challenging to obtain. While our immediate applications have focused on scenarios on land subsidence and post-seismic deformation, the broader implications of our methodology are evident. We anticipate the proposed framework will be expanded to various application domains, including mining, infrastructure stability, and other situations involving sustained motions. The proposed framework will ultimately contribute to more informed decision-making and risk assessment in complex dynamic systems.

How to cite: Peng, M., Motagh, M., Lu, Z., Xia, Z., Guo, Z., Zhao, C., and Quan, Y.: Improving ground deformation prediction in satellite InSAR using ICA-assisted RNN model, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-10467, https://doi.org/10.5194/egusphere-egu24-10467, 2024.

14:25–14:35
|
EGU24-10706
|
ECS
|
On-site presentation
Explaining GNSS station movements based on Earth observation data
Laura Crocetti, Rochelle Schneider, and Benedikt Soja

Global Navigation Satellite Systems (GNSS) are best known for their accurate positioning, navigation, and timing capabilities. In total, over 20,000 permanent high-grade GNSS stations are available worldwide, the positions of which are monitored with millimeter accuracy. Thanks to this high accuracy and the fact that these stations are mounted on the ground, subtle movements due to hydrological changes and crustal deformation can be observed. Thus, the GNSS observations contain valuable geophysical information. Although many geodetic applications require these movements to be properly understood and potentially corrected, this is not trivial due to the complexity of the interactions within the Earth's system. Therefore, there is a severe lack of available models explaining residual GNSS station movements beyond conventionally modeled effects. Conversely, if these movements are properly understood, GNSS observations might contribute to the correct interpretation of emerging environmental changes.

This study exploits the wealth of satellite-derived Earth observation (EO) data to derive suitable models to explain GNSS station movements. We combine GNSS station coordinate time series and EO variables with the help of machine learning techniques to benefit from various types of information. While the target vector consists of concatenated GNSS station coordinate time series over Europe, EO variables such as precipitation, soil water, snow water equivalent, and land cover data are used as input features. Different machine learning models, including Random Forest, XGBoost, and Multilayer Perceptron, are investigated and compared. Additionally, a sensitivity analysis is performed to determine the individual impact of EO variables to quantify what drives GNSS movements, which in turn, might allow monitoring the corresponding Earth system processes based on GNSS observations.
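As a minimal sketch of this kind of setup, the following trains a Random Forest to predict a station coordinate residual from EO-like inputs and reads off impurity-based feature importances as a crude proxy for the sensitivity analysis. The features, units, and the dependence of the target on soil water are all invented so that the expected ranking is known.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 500
# Hypothetical EO input features (units arbitrary in this sketch)
precip = rng.gamma(2.0, 5.0, n)
soil_water = rng.uniform(0, 1, n)
snow_we = rng.exponential(50.0, n)

# Synthetic vertical coordinate residual: loading-like response,
# dominated here by soil water so the importance ranking is predictable
up = -4.0 * soil_water - 0.01 * precip - 0.005 * snow_we + 0.1 * rng.normal(size=n)

X = np.column_stack([precip, soil_water, snow_we])
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, up)
importances = rf.feature_importances_  # proxy for the abstract's sensitivity analysis
```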

How to cite: Crocetti, L., Schneider, R., and Soja, B.: Explaining GNSS station movements based on Earth observation data, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-10706, https://doi.org/10.5194/egusphere-egu24-10706, 2024.

14:35–14:45
|
EGU24-10290
|
ECS
|
On-site presentation
A Globally Trained Deep Learning Model for Estimation of Seasonal Residual Signals in GNSS displacement time series
Kaan Çökerim, Jonathan Bedford, and Henryk Dobslaw

Displacement time series from Global Navigation Satellite Systems (GNSS) at daily rates are commonly used to investigate and understand the processes controlling Earth's surface deformation, from tectonic processes such as postseismic slip, slow slip events and viscoelastic relaxation to non-tectonic applications such as reflectometry, atmospheric sensing and remote sensing. For each individual research field, different parts of the total recorded GNSS displacement time series are of interest. A major difficulty is the modeling and isolation of non-tectonic seasonal signals that are established to be related to non-tidal surface loading.

In the past, many methods based on Kalman filters, matrix factorization and various curve-fitting approaches were developed with some success to separate the tectonic and non-tectonic contributions. However, these methods still have difficulties in isolating the seasonal loading signals, especially in the presence of interannual variations in the seasonal loading pattern and of steps in the time series.

We present here a deep learning model, trained on a globally distributed dataset of continuous, 8-10-year-long PPP-GNSS displacement time series from ~8,000 NGL stations, that estimates the seasonal loading signals using the global non-tidal surface loading model developed at ESM-GFZ. We compare our model to other statistical methods for isolating the seasonal signal, taking the established subtraction of the non-tidal surface loading signals (hydrological loading, and non-tidal atmospheric and oceanic loading) as our baseline. We also present the evaluation of our model and its capabilities in reducing the seasonal loading signal as well as parts of the high-frequency scatter in the original GNSS time series.
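The classical curve-fitting baseline that such methods are compared against can be sketched as a least-squares fit of annual and semiannual harmonics (plus offset and trend) to a daily displacement series. The series below is synthetic and the amplitudes are invented; it shows only the baseline technique, not the deep learning model.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(0, 8, 1 / 365.25)  # 8 years of daily epochs (in years)
truth = 3.0 * np.sin(2 * np.pi * t + 0.4) + 1.0 * np.sin(4 * np.pi * t - 1.0)
obs = truth + 1.5 * rng.normal(size=t.size)  # mm, with daily noise

# Design matrix: offset, trend, annual and semiannual sine/cosine terms
A = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
                     np.sin(4 * np.pi * t), np.cos(4 * np.pi * t)])
coef, *_ = np.linalg.lstsq(A, obs, rcond=None)
seasonal_fit = A[:, 2:] @ coef[2:]          # reconstructed seasonal signal
rms_err = np.sqrt(np.mean((seasonal_fit - truth) ** 2))
```

With a stationary seasonal signal this baseline works well; the abstract's motivation is precisely the cases (interannual variations, steps) where it does not.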

 

How to cite: Çökerim, K., Bedford, J., and Dobslaw, H.: A Globally Trained Deep Learning Model for Estimation of Seasonal Residual Signals in GNSS displacement time series, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-10290, https://doi.org/10.5194/egusphere-egu24-10290, 2024.

14:45–14:55
|
EGU24-3117
|
On-site presentation
Enhanced Real-time Global Ionospheric Maps using Machine Learning
Marcel Iten, Shuyin Mao, and Benedikt Soja

Accurate ionospheric models are essential for single-frequency high-precision Global Navigation Satellite Systems (GNSS) applications. Global ionospheric maps (GIMs), which depict the global distribution of vertical total electron content (VTEC), are a widely used ionospheric product provided by the International GNSS Service (IGS). To meet the increasing need for real-time applications, the IGS real-time service (RTS) has been established and offers real-time (RT) GIMs that can be used for real- or near-real-time applications. However, the accuracy of present RT GIMs is still significantly lower than that of the final GIMs: IGS RT GIMs show an RMSE of 3.5-5.5 TECU compared to the IGS final GIMs. In this study, we focus on enhancing the accuracy of RT GIMs through machine learning (ML) approaches, specifically a classical Convolutional Neural Network (CNN) and a Generative Adversarial Network (GAN). The objective is to bridge the gap between the RT GIMs and the final IGS GIMs. This is achieved by using RT GIMs as input and final GIMs as target. The ML approach is applied to the IGS combined RT GIMs and Universitat Politècnica de Catalunya (UPC) RT GIMs. The performance of the improved RT GIMs is evaluated in comparison to the combined IGS final GIM.

We consider over 11,000 pairs of RT GIMs and final GIMs. Over a comprehensive test period spanning 3.5 months, the proposed approach shows promising results, with an enhancement of more than 30% in the accuracy of RT GIMs. Especially for regions with high VTEC values, which are most critical, the results show a significant improvement. The results demonstrate the model's great potential in generating more accurate and refined real-time maps.

The integration of ML techniques proves to be a promising avenue for refining and augmenting the precision of real-time ionospheric maps, thereby addressing critical needs in the realm of space weather monitoring and single-frequency applications.

How to cite: Iten, M., Mao, S., and Soja, B.: Enhanced Real-time Global Ionospheric Maps using Machine Learning, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-3117, https://doi.org/10.5194/egusphere-egu24-3117, 2024.

14:55–15:05
|
EGU24-1853
|
ECS
|
On-site presentation
Using Machine Learning for identifying TEC signatures related to earthquakes and tsunamis: the 2015 Illapel event case study
Federica Fuso, Michela Ravanelli, Laura Crocetti, and Benedikt Soja

It is known that natural hazards such as volcanic eruptions, earthquakes, and tsunamis can trigger acoustic and gravity waves (AGWs) that could reach the ionosphere and generate electron density disturbances known as Travelling Ionospheric Disturbances (TIDs). These disturbances can be investigated in terms of variations in the ionospheric total electron content (TEC) measurements, collected by continuously operating ground-based Global Navigation Satellite Systems (GNSS) receivers. The VARION (Variometric Approach for Real-Time Ionosphere Observation) algorithm is a well-known real-time tool for estimating TEC variations. It is based on single-time differences of geometry-free combinations of GNSS carrier-phase measurements.

Artificial Intelligence (AI), particularly in machine learning, offers computational efficiency and data handling, leading to its exploration in ionospheric studies. In this context, the abundance of data allows the exploration of a VARION-based machine learning classification approach to detect TEC perturbation. For this purpose, we used the VARION-TEC variations from the 2015 Illapel earthquake and tsunami, leveraging the distinct ionospheric response triggered by the event.

We employed machine learning algorithms, specifically Random Forest (RF) and XGBoost (XGB), using the VARION-core observations (i.e., dsTEC/dt) as input features. We formulated a binary classification problem using supervised machine learning algorithms and manually labelled the time frames of TEC perturbations as the target variable. We considered two elevation cut-off time series, namely 15° and 25°, to which we applied the classifier. XGBoost with a 15° elevation cut-off dsTEC/dt time series reached the best performance, achieving an F1 score of 0.77, recall of 0.74, and precision of 0.80 on the test data. In more detail, the model accurately classified 183 out of 247 (74.09%) samples of sTEC variations related to the earthquake and tsunami (True Positives, TP). Moreover, 2975 out of 3021 (98.49%) testing samples were correctly classified as containing no sTEC variations caused by an earthquake (True Negatives, TN). However, 64 out of 247 samples (25.91%) were erroneously classified as not containing sTEC variations related to the event (False Negatives, FN), while 46 out of 3021 (1.51%) were wrongly classified as containing sTEC variations related to the earthquake and tsunami (False Positives, FP).
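The reported scores follow directly from the confusion-matrix counts given above, which is a useful consistency check:

```python
# Confusion-matrix counts reported in the abstract (test set)
tp, fn = 183, 64      # perturbation samples: detected / missed
tn, fp = 2975, 46     # quiet samples: correct / false alarms

precision = tp / (tp + fp)                          # 183/229 ≈ 0.80
recall = tp / (tp + fn)                             # 183/247 ≈ 0.74
f1 = 2 * precision * recall / (precision + recall)  # ≈ 0.77
```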

This model showed a 75-second average deviation in predicting perturbation time frames for testing links, equivalent to 5 steps in the 15-second time series intervals. This highlights the algorithm's potential for early detection of ionospheric perturbations from earthquakes and tsunamis, aiding in early warning purposes.

Finally, the model efficiently detects TIDs within 2-3 minutes, showing an impressive computational efficiency, crucial for effective early warning systems. It relies only on the VARION-generated real-time TEC time series (dsTEC/dt), enabling its application in an operational real-time setting using real-time GNSS data.

In conclusion, this work demonstrates high-probability TEC signature detection by machine learning for earthquakes and tsunamis, which can be used to enhance tsunami early warning systems.

How to cite: Fuso, F., Ravanelli, M., Crocetti, L., and Soja, B.: Using Machine Learning for identifying TEC signatures related to earthquakes and tsunamis: the 2015 Illapel event case study, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-1853, https://doi.org/10.5194/egusphere-egu24-1853, 2024.

15:05–15:15
|
EGU24-12715
|
ECS
|
On-site presentation
Explainable AI for GNSS Reflectometry: Investigating Feature Importance for Ocean Wind Speed Estimation
Tianqi Xiao, Milad Asgarimehr, Caroline Arnold, Daixin Zhao, Lichao Mou, and Jens Wickert

Spaceborne GNSS Reflectometry (GNSS-R) is a novel remote sensing technique providing a growing data volume with global coverage and enhanced temporal resolution. The reflected, pre-existing L-band signals of opportunity transmitted by Global Navigation Satellite Systems contain information about the reflecting surface properties, which can be quantified and converted into data products for further studies. To retrieve such information, artificial intelligence (AI) models are implemented to estimate geophysical parameters based on the GNSS-R observations. As increasingly complex deep learning models are proposed and more and more input features are considered, understanding the decision-making process of the models and the contributions of the input features becomes as important as enhancing the model output accuracy.

This study explores the potential of the Explainable AI (XAI) to decode complex deep learning models for ocean surface wind speed estimation trained by the Cyclone GNSS (CYGNSS) observations. The input feature importance is evaluated by applying the SHAP (SHapley Additive exPlanations) Gradient Explainer to the model on an unseen dataset. By analyzing the SHAP value of each input feature, we find that in addition to the two known parameters that are used in the operational wind speed retrieval algorithm, other scientific and technical ancillary parameters, such as the orientation of the satellite and the signal power information are also useful for the model.
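The study uses SHAP's GradientExplainer; a lighter-weight, model-agnostic stand-in for the same question ("which inputs matter?") is scikit-learn's permutation importance, sketched below on synthetic GNSS-R-like features. The feature names, coefficients, and model are invented for illustration and are not the CYGNSS retrieval.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(3)
n = 800
# Hypothetical GNSS-R inputs: two "operational" observables plus one ancillary
nbrcs = rng.normal(10, 2, n)       # normalized bistatic radar cross-section
les = rng.normal(5, 1, n)          # leading-edge slope
sp_angle = rng.uniform(0, 60, n)   # incidence angle (ancillary parameter)

# Synthetic wind speed, dominated by NBRCS so the ranking is predictable
wind = 20 - 0.8 * nbrcs - 1.2 * les + 0.05 * sp_angle + 0.5 * rng.normal(size=n)

X = np.column_stack([nbrcs, les, sp_angle])
model = GradientBoostingRegressor(random_state=0).fit(X, wind)
result = permutation_importance(model, X, wind, n_repeats=5, random_state=0)
ranking = np.argsort(result.importances_mean)[::-1]  # most important first
```

Unlike SHAP, permutation importance gives only a global ranking, not per-sample attributions, which is why the study's per-feature SHAP values carry more interpretive weight.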

We seek to offer a better understanding of the deep learning models for estimating ocean wind speed using GNSS-R data and explore the potential features for more accurate retrieval. In addition to building an efficient model with effective inputs, XAI also helps us to discover the important factors found by models which can enhance the physical understanding of the GNSS-R mechanism.

How to cite: Xiao, T., Asgarimehr, M., Arnold, C., Zhao, D., Mou, L., and Wickert, J.: Explainable AI for GNSS Reflectometry: Investigating Feature Importance for Ocean Wind Speed Estimation, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-12715, https://doi.org/10.5194/egusphere-egu24-12715, 2024.

15:15–15:25
|
EGU24-10575
|
ECS
|
On-site presentation
Feature Selection and Deep Learning for Simultaneous Forecasting of Celestial Pole Offset (CPO) and Polar Motion (PM)
Sonia Guessoum, Santiago Belda, José Manuel Ferrándiz, Ahmed Begga, Maria Karbon, Harald Schuh, Sadegh Modiri, and Robert Heinkelmann

Accurate prediction of Earth orientation parameters (EOPs) is critical for astro-geodynamics, high-precision space navigation, and positioning, and deep space exploration. However, the current models' prediction accuracy for EOPs is significantly lower than that of geodetic technical solutions, which can adversely affect certain high-precision real-time users. In this study, we introduce a simultaneous prediction approach for Polar Motion (PM) and Celestial Pole Offsets (CPO) employing deep neural networks, aiming to deliver precise predictions for both parameters.
The methodology comprises three components, the first being feature interaction and selection. Feature selection in deep learning differs from that in traditional machine learning, and traditional methods may not be directly applicable since deep networks are designed to automatically learn relevant features. Consequently, we propose in this step a solution based on feature engineering to select the best set of variables that keeps the model as simple as possible while retaining sufficient precision and accuracy, using recursive feature elimination and the SHAP value algorithm, aiming to investigate the influence of the Free Core Nutation (FCN), with its amplitude and phase, on the CPO forecasting. This investigation is crucial since FCN is the main source of variance of the CPO series. Considering the role of the effective angular momentum (EAM) functions and their direct influence on the Earth's rotation, it is logical to assess numerically the impact of EAM on the polar motion and FCN excitations. SHAP values aid in comprehending how each feature contributes to the final predictions, highlighting the significance of each feature relative to others, and revealing the model's dependency on feature interactions.
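Recursive feature elimination, one of the two selection tools named above, can be sketched in a few lines with scikit-learn. The six features here are arbitrary stand-ins (not actual EAM/FCN series), constructed so that only two of them genuinely drive the target.

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)
n = 600
X = rng.normal(size=(n, 6))  # hypothetical stand-ins for candidate features
# Target depends only on features 0 and 3 in this toy setup
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.1 * rng.normal(size=n)

# RFE repeatedly fits the estimator and drops the least important feature
selector = RFE(RandomForestRegressor(n_estimators=100, random_state=0),
               n_features_to_select=2).fit(X, y)
selected = np.flatnonzero(selector.support_)
```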
During the second phase, we formulate two deep-learning methods for each parameter. The first Neural Network incorporates all features, while the second focuses on the subset of features identified in the initial step. This stage primarily involves exploring feature and hyperparameter tuning to enhance model performance. The SHAP value algorithm is also used in this stage for interpretation. 
In the final phase, we construct a multi-task deep learning model designed to simultaneously predict CPO and PM. This model is built using the optimal set of features and hyperparameters identified in the preceding steps. To validate the methodology, we employ the most recent version of the time series from the International Earth Rotation and Reference Systems Service (IERS), namely IERS 20 C04, and EAM provided by the German Research Center for Geosciences (GFZ). We focus on a forecasting horizon of 90 days, the practical forecasting horizon needed in space-geodetic applications.
In the end, we conclude that the developed model is proficient in simultaneously predicting CPO and PM. The incorporation of EAM sheds light on its role in CPO excitations and polar motion predictions.

How to cite: Guessoum, S., Belda, S., Ferrándiz, J. M., Begga, A., Karbon, M., Schuh, H., Modiri, S., and Heinkelmann, R.:  Feature Selection and Deep Learning for Simultaneous Forecasting of Celestial Pole Offset (CPO) and Polar Motion (PM), EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-10575, https://doi.org/10.5194/egusphere-egu24-10575, 2024.

15:25–15:35
|
EGU24-12487
|
ECS
|
Virtual presentation
Signal separation in global, temporal gravity data using a multi-channel U-Net
Betty Heller-Kaikov, Roland Pail, and Martin Werner

One big challenge in the analysis and interpretation of geodetic data is the separation of the individual signal and noise components contained in the data. Specifically, the global, temporal gravity data obtained by the GRACE and GRACE Follow-On satellite missions contain spatial-temporal gravity signals caused by all kinds of mass variations in the Earth’s system. While only the sum of all signals can be measured, for geophysical interpretation, an extraction of individual signal contributions is necessary.

Therefore, our aim is to develop an algorithm solving the signal separation task in global, temporal gravity data. Since the individual signal components are characterized by specific patterns in space and time, the algorithm to be found needs to be able to extract patterns in the 3-dimensional latitude-longitude-time space.

We propose to exploit the pattern recognition abilities of deep neural networks for solving the signal separation task. Our method uses a multi-channel U-Net architecture which is able to translate the sum of various signals as single-channel input to the individual signal components as multi-channel output. The loss function is a weighted sum of the L2 losses of the individual signals.
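The weighted loss described above is simple to state concretely. The NumPy sketch below is only an illustration of the loss definition on a toy grid; the actual model would compute this inside a deep learning framework, and the channel layout and weights here are assumptions.

```python
import numpy as np

def weighted_l2_loss(pred, target, weights):
    """Weighted sum of per-channel L2 losses.

    pred, target: (channels, lat, lon, time) arrays of gravity signal fields
    weights: one scalar weight per output channel (signal component)
    """
    per_channel = np.mean((pred - target) ** 2, axis=(1, 2, 3))
    return float(np.sum(np.asarray(weights) * per_channel))

# Toy check: 3 signal channels on a tiny grid, constant per-channel offsets
rng = np.random.default_rng(4)
target = rng.normal(size=(3, 8, 16, 5))
pred = target + np.array([0.1, 0.2, 0.3])[:, None, None, None]
loss = weighted_l2_loss(pred, target, [1.0, 1.0, 1.0])  # 0.01 + 0.04 + 0.09
```

Raising a channel's weight penalizes its separation error more, which is exactly the tuning knob the contribution investigates.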

We perform a supervised training using synthetic data derived from the updated Earth System Model of ESA. The latter consists of separate datasets for temporal gravity variations caused by mass redistribution processes in the atmosphere, the oceans, the continental hydrosphere, the cryosphere and the solid Earth.

In our study, we use different parts of this dataset to form training and test datasets. In this fully-synthetic framework, the ground truth of the individual signal components is also known in the testing stage, allowing a direct computation of the separation errors of the trained separation model.

In our contribution, we present results on optimizing our algorithm by tuning various hyperparameters of the neural network. Moreover, we demonstrate the impact of the number of superimposed signals and the definition of the loss function on the signal separation performance of our algorithm.

How to cite: Heller-Kaikov, B., Pail, R., and Werner, M.: Signal separation in global, temporal gravity data using a multi-channel U-Net, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-12487, https://doi.org/10.5194/egusphere-egu24-12487, 2024.

15:35–15:45
|
EGU24-14724
|
ECS
|
On-site presentation
Exploring the performance of machine learning models for the GNSS-IR retrieval of seasonal snow height
Matthias Aichinger-Rosenberger and Benedikt Soja

Snow is a key variable of the global climate system and the hydrological cycle, as well as one of the most critical sources of freshwater. Therefore, measurements of snow-related parameters such as seasonal snow height (SSH) or snow-water-equivalent (SWE) are of great importance for science, economy and society. Traditionally, these parameters are either measured manually or with automated ground-based sensors, which are accurate, but expensive and suffer from low temporal and spatial resolution.

A new alternative to such systems is the use of GNSS observations through the GNSS interferometric reflectometry (GNSS-IR) method. The technique enables users to infer information about soil moisture, snow depth, or vegetation water content. Signal-to-noise ratio (SNR) observations collected by GNSS receivers are sensitive to the interference between the direct signal and the reflected signal (often referred to as “multipath”). The interference pattern changes with the elevation angle of the satellite, the signal wavelength, and the height of the GNSS antenna above the reflecting surface. By comparing the reflector heights estimated over snow surfaces with those from bare-soil conditions, the snow height can be determined.

The estimation of reflector heights, and hence of SSH, is typically carried out using Lomb-Scargle periodogram (LSP) spectral analysis. This study investigates the potential of machine learning methods for this task, using similar input parameters as the standard GNSS-IR retrieval. Results from different supervised algorithms such as Random Forest (RF) or Gradient Boosting (GB) are shown for different GNSS sites and experimental setups. First investigations indicate that snow heights can be successfully obtained with machine learning, with results less noisy than those of classical approaches.
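The classical LSP retrieval the study benchmarks against can be sketched on an idealized, noise-free SNR arc: the multipath oscillation in sin(elevation) has frequency 2h/λ, so the periodogram peak yields the reflector height h. The wavelength and geometry below are textbook GNSS-IR values, not taken from the abstract.

```python
import numpy as np
from scipy.signal import lombscargle

lam = 0.19029   # GPS L1 wavelength (m)
h_true = 2.0    # antenna height above the (snow) surface (m)

elev = np.deg2rad(np.linspace(5, 25, 300))   # elevation angles of one arc
x = np.sin(elev)                             # GNSS-IR independent variable
snr = np.cos(4 * np.pi * h_true / lam * x)   # idealized multipath oscillation
snr = snr - snr.mean()

# Scan candidate reflector heights via the corresponding angular frequencies
h_grid = np.linspace(0.5, 5.0, 2000)
omega = 4 * np.pi * h_grid / lam
power = lombscargle(x, snr, omega)
h_est = h_grid[np.argmax(power)]             # periodogram peak -> height
```

Subtracting the bare-soil reflector height from `h_est` during snow-covered periods gives the snow height, which is the quantity the ML models are trained to reproduce.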

How to cite: Aichinger-Rosenberger, M. and Soja, B.: Exploring the performance of machine learning models for the GNSS-IR retrieval of seasonal snow height, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-14724, https://doi.org/10.5194/egusphere-egu24-14724, 2024.

Posters on site: Mon, 15 Apr, 10:45–12:30 | Hall X2

Display time: Mon, 15 Apr, 08:30–Mon, 15 Apr, 12:30
Chairperson: Benedikt Soja
X2.11
|
EGU24-16740
|
ECS
|
Signal decomposition of multi-source displacement fields with component analysis methods, applied to InSAR time series of the Epe gas storage cavern field (Germany)
Alison Seidel, Markus Even, Malte Westerhaus, and Hansjörg Kutterer

Time series of interferometric SAR (InSAR) images offer the potential to detect and monitor surface displacements with high spatial and temporal resolution, even for small and slow deformation processes. Yet, due to the nature of InSAR, the interferometric signal can contain a multitude of contributions. Different displacement source mechanisms may superpose each other, residual atmospheric and topographic signals may not be completely removed during processing of the time series, and incoherent noise may be present. Therefore, the criteria for the selection of temporally stable pixels are often rather strict, leading to a significant reduction of the spatial point density.

However, to understand the underlying processes of a deformation field, it is important to extract the displacement signals from the data at the best resolution possible and to differentiate signals from different source mechanisms. Furthermore, being able to describe the displacement field as a superposition of several simple mechanisms is a possible answer to the general question of how the information content of tens of thousands of points, each coming with a time series over hundreds of acquisitions, can be extracted and comprehended.

We address these issues by determining the dominant displacement signals of different sources in a subset of reliable pixels of InSAR time series datasets with data-driven component analysis methods. Subsequently, we use models of these signals to identify their displacement patterns in previously disregarded pixels. We utilize the statistical principal component analysis to remove uncorrelated signal contributions and compare different blind source separation methods, such as independent component analysis and independent vector analysis, for differentiating between displacements of different origin.

We apply our method to a dataset of multiple orbits of Sentinel-1 InSAR time series from 2015 to 2022 over the Epe gas storage cavern field in North Rhine-Westphalia, Germany. Epe displays a complex surface displacement field, consisting of trends caused by cavern convergence, cyclic gas-pressure-dependent contributions, as well as groundwater-dependent seasonal displacements. With our approach, we can successfully distinguish the signals of the different source mechanisms and obtain a dense spatial sampling of these signals. Our results show good agreement with geodetic measurements from GNSS and levelling, and show a strong correlation with cavern filling levels and groundwater levels, suggesting causal relations.

How to cite: Seidel, A., Even, M., Westerhaus, M., and Kutterer, H.: Signal decomposition of multi-source displacement fields with component analysis methods, applied to InSAR time series of the Epe gas storage cavern field (Germany), EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-16740, https://doi.org/10.5194/egusphere-egu24-16740, 2024.

X2.12
|
EGU24-9203
|
ECS
Dung Thi Vu, Adriano Gualandi, Francesco Pintori, Enrico Serpelloni, and Giuseppe Pezzo

Automatic detection and characterization of the spatial and temporal features of surface deformation signals associated with anthropogenic activities is a challenging task, with important implications for the evaluation of multi-hazards related to human activities (e.g., earthquakes, subsidence, sea-level rise and flooding), particularly in coastal areas. In this work, we use synthetic Global Navigation Satellite System (GNSS) displacement time series and apply deep learning algorithms (i.e., a convolutional neural network (CNN) and an autoencoder) to extract the temporal and spatial features of ground deformation due to natural and anthropogenic processes. We focus on improving three fundamental aspects of the GNSS technique: spatial coverage, temporal coverage and measurement accuracy. The study area is Northern Italy, in particular the North Adriatic coast, where gas and oil production sites as well as gas storage sites are present. While at production sites hydrocarbons are extracted continuously throughout the year, at storage sites the gas/oil is usually injected from April to October and extracted between November and March. Our goals are to understand the effect of hydrocarbon production and extraction/injection processes on surface deformation, as precisely measured by the dense network of continuous GNSS stations operating in the study area, and to assess the ability of the CNN-Autoencoder to characterize ground displacements caused by anthropogenic processes. The aims of this work are to identify temporal and spatial patterns in ground deformation time series caused by oil and gas extraction and/or gas storage (i.e., extraction and injection), and to estimate reservoir parameters (i.e., volumes, depths and extents). We build the training dataset by setting up 202 GNSS stations and randomly locating gas/oil reservoirs, which are described by a simple Mogi model and characterized by different depths and temporal evolutions of volume change.
The Mogi model, which approximates a reservoir as a spherical source, predicts the ratio of horizontal to vertical displacement due to volume change (i.e., inflation or deflation) and pressure varying with time. The temporal evolution of the volumes of the Mogi sources is simulated using different parameters associated with several functions, namely seasonal, exponential, multi-linear and bell-shaped. Weighted principal component analysis (WPCA) is used to deal with missing data, a common feature of GNSS time series, under the assumption that the weights of missing data are zero. Furthermore, since the CNN-Autoencoder operates on images, the synthetic GNSS time series are interpolated using the Kriging method, a form of Gaussian process regression, to obtain the ground displacement in 2D physical space. After calibrating the CNN-Autoencoder model with the synthetic GNSS time series, the model is applied to real data. The code is written in Python and runs on a high-performance computing (HPC) cluster with graphics processing units (GPUs) at the National Institute of Geophysics and Volcanology (INGV) in Bologna, Italy.
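For reference, the Mogi point-source surface displacement field used for such synthetic training data can be sketched in a few lines; the depth and volume-change values below are hypothetical, and ν is Poisson's ratio:

```python
import numpy as np

def mogi_displacement(r, depth, dV, nu=0.25):
    """Surface displacement of a Mogi point source.

    r     : horizontal distance(s) from the source axis [m]
    depth : source depth [m]
    dV    : volume change [m^3] (positive = inflation)
    nu    : Poisson's ratio
    Returns (u_r, u_z): radial and vertical displacement [m].
    """
    R3 = (r ** 2 + depth ** 2) ** 1.5
    c = (1.0 - nu) * dV / np.pi
    return c * r / R3, c * depth / R3

# Example: a deflating reservoir (extraction) at 1.5 km depth
r = np.linspace(0.0, 5000.0, 6)
u_r, u_z = mogi_displacement(r, depth=1500.0, dV=-1.0e5)
print("u_z [mm]:", (1e3 * u_z).round(2))
```

Note that for a Mogi source the horizontal-to-vertical displacement ratio reduces to simply r/depth, which is part of what makes it a convenient forward model for generating synthetic displacement fields.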

How to cite: Vu, D. T., Gualandi, A., Pintori, F., Serpelloni, E., and Pezzo, G.: Deep Learning spatio-temporal analysis of anthropogenic ground deformation recorded by GNSS time series in the North Adriatic coasts of Italy, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-9203, https://doi.org/10.5194/egusphere-egu24-9203, 2024.

X2.13
|
EGU24-4609
Guanwen Huang, Mengyuan Li, and Le Wang

In precise point positioning (PPP), the stochastic model of the observations determines the availability and reliability of the positioning accuracy. Observations are usually weighted according to the elevation angle of the GNSS observation: the lower the elevation angle, the stronger the influence of atmospheric noise and multipath on the observation data, and the lower the observation accuracy. Based on this, we propose a multi-indicator comprehensive assessment based on grey correlation analysis for the observation stochastic model of PPP. The position dilution of precision (PDOP), carrier-to-noise density ratio (C/N0) and pseudorange multipath indicators are selected to construct a multi-indicator matrix. First, the indicators are normalized; then, the entropy weight of each assessment indicator is calculated to determine the indicator weights. After selecting the optimal indicator set, the matrix is constructed to compute the grey correlation coefficients and finally the grey correlation degree. With this method, a comprehensive assessment of the quality of the satellite observation data can be obtained for each epoch, and the PPP weight matrix can be established. One week of observations from 243 MGEX stations is selected to conduct GPS-only, Galileo-only and BDS-3-only kinematic PPP, applying the conventional elevation-dependent stochastic model and the proposed method, respectively. The results show that, compared with the traditional method, both the positioning accuracies and the convergence times can be improved using the proposed method. The positioning accuracies for GPS improve by about 4.23%, 8.66%, 5.04% and 5.46% in the east (E), north (N), up (U) and three-dimensional (3D) directions, respectively; by 15.96%, 14.25%, 14.72% and 15.01% for Galileo; and by 13.53%, 8.42%, 11.65% and 11.40% for BDS-3. The average improvements in convergence time in the east, north and up directions are 5.53%, 7.80% and 5.01% for GPS, BDS-3 and Galileo, respectively.
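A minimal sketch of the entropy-weight and grey-relational steps described above, using an invented three-satellite, three-indicator matrix (not the authors' code or data), could look like:

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight of each column (indicator) of a benefit-oriented matrix X."""
    P = X / X.sum(axis=0)                        # column-wise normalization
    P = np.clip(P, 1e-12, None)                  # guard against log(0)
    e = -(P * np.log(P)).sum(axis=0) / np.log(X.shape[0])  # entropy per indicator
    d = 1.0 - e                                  # degree of divergence
    return d / d.sum()

def grey_relational_grade(X, weights, rho=0.5):
    """Weighted grey relational grade of each row w.r.t. the ideal row."""
    ref = X.max(axis=0)                          # ideal (best) indicator values
    delta = np.abs(X - ref)
    xi = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
    return xi @ weights                          # grade per row (satellite/epoch)

# Hypothetical per-satellite quality indicators, already scaled so that
# larger = better (e.g. inverse PDOP, C/N0, inverse pseudorange multipath).
X = np.array([[0.9, 0.8, 0.7],
              [0.5, 0.6, 0.4],
              [0.7, 0.9, 0.8]])
w = entropy_weights(X)
grade = grey_relational_grade(X, w)
print("weights:", w.round(3), "grades:", grade.round(3))
```

In the scheme the abstract describes, grades of this kind would then be mapped to observation weights in the PPP stochastic model.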

How to cite: Huang, G., Li, M., and Wang, L.: Multi-Indicator Comprehensive Assessment for Observation Stochastic Model of PPP, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-4609, https://doi.org/10.5194/egusphere-egu24-4609, 2024.

X2.14
|
EGU24-11427
|
ECS
Yuanxin Pan, Lucy Icking, Fabian Ruwisch, Steffen Schön, and Benedikt Soja

The reception of non-line-of-sight (NLOS) signals is a prevalent issue for Global Navigation Satellite System (GNSS) applications in urban environments. Such signals can significantly degrade the positioning and navigation accuracy for pedestrians and vehicles. While various methods, such as dual-polarization antennas and 3D building models, have been proposed to identify NLOS signals, they often require additional equipment or impose computational burdens, which limits their practicality. In this study, we introduce a machine learning (ML)-based classifier designed to detect NLOS signals based solely on quality indicators extracted from raw GNSS observations. We examined several input features, including carrier-to-noise density and elevation, and analyzed their relative importance. The effectiveness of our approach was validated using multi-GNSS data collected statically in the city of Hannover. To establish ground truth (i.e., a target) for training and testing the model, we used ray tracing in combination with a 3D building model of Hannover. The developed ML-based classifier achieved an accuracy of approximately 90% for NLOS signal classification. Furthermore, a vehicle-borne data set was used to test the utility of the ML-based signal classifier for kinematic positioning. The performance of the ML-aided positioning solution was compared against a solution without NLOS classification (raw solution) and with the ray-tracing-based classification results (reference solution). It was found that the ML-based solution demonstrated positioning precisions of 0.47 m, 0.55 m and 1.02 m in the east, north and up components, respectively. This represents improvements of 64.6%, 33.4% and 36.6% over the raw solution. Additionally, we examined the performance of the ML-based classifier across various urban environments along the vehicle trajectory to gain deeper insights.
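A toy version of such a classifier, here a random forest trained on two synthetic quality indicators rather than the authors' full feature set and real ray-tracing labels, might look like:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 4000

# Synthetic stand-in for GNSS quality indicators: NLOS signals tend to
# show lower C/N0 and lower elevation than line-of-sight signals.
nlos = rng.integers(0, 2, n)                        # 1 = NLOS ("ray-tracing" label)
cn0 = np.where(nlos, rng.normal(32, 4, n), rng.normal(45, 4, n))    # dB-Hz
elev = np.where(nlos, rng.normal(20, 8, n), rng.normal(45, 15, n))  # degrees
X = np.column_stack([cn0, elev])

X_tr, X_te, y_tr, y_te = train_test_split(X, nlos, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

acc = accuracy_score(y_te, clf.predict(X_te))
print(f"accuracy: {acc:.2f}")
print("feature importances (C/N0, elevation):", clf.feature_importances_.round(2))
```

The feature-importance output mirrors the relative-importance analysis mentioned in the abstract, though the real study uses more indicators and measured data.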

How to cite: Pan, Y., Icking, L., Ruwisch, F., Schön, S., and Soja, B.: Non-line-of-sight GNSS Signal Classification for Urban Navigation Using Machine Learning, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-11427, https://doi.org/10.5194/egusphere-egu24-11427, 2024.

X2.15
|
EGU24-9543
|
ECS
|
Duo Wang, Lingke Wang, and Hansjörg Kutterer

In recent years, geodesy based on spaceborne microwave remote sensing has seen significant advances. However, whether the observations come from the Global Navigation Satellite System (GNSS) or Interferometric Synthetic Aperture Radar (InSAR), the results are inevitably influenced by tropospheric delay. Although the zenith total delay (ZTD) can be estimated from the gridded meteorological data of the ERA5 reanalysis product and from empirical models, its accuracy is still insufficient to meet the needs of modern geodesy. To overcome this challenge, we propose leveraging machine learning techniques to learn the local spatio-temporal patterns of tropospheric delay in order to infer the ZTD and the zenith wet delay (ZWD) at any location within the learning area.

Our findings indicate that artificial neural networks can establish a robust mapping between ZTD estimated by empirical models and GNSS-measured ZTD. By then employing an ensemble learning strategy and a time series dynamics model, the ZTD at any location within the sample area can be inferred. To evaluate our approach, we conducted tests during the active water vapor season in the Tübingen region of Baden-Württemberg, Germany, from June 25 to July 9, 2022. In comparative experiments, our proposed method significantly reduced the root mean square error (RMSE) of the ZTD with respect to ERA5, from 16.4292 mm to 7.2108 mm. This reflects a remarkable accuracy improvement of 56.11%.
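A stripped-down sketch of the first step, learning a correction from empirical-model ZTD to "GNSS" ZTD with a small neural network, could look as follows; all features, values and the bias model are invented for illustration:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n = 2000

# Hypothetical features: empirical-model ZTD [mm], latitude, longitude and
# a seasonal phase; the "GNSS" ZTD carries a location- and season-dependent
# bias that the network must learn (all values synthetic).
ztd_model = rng.normal(2400.0, 30.0, n)
lat = rng.uniform(48.0, 49.0, n)
lon = rng.uniform(8.5, 9.5, n)
phase = rng.uniform(0.0, 2.0 * np.pi, n)
bias = 10.0 * np.sin(phase) + 5.0 * (lat - 48.5)
ztd_gnss = ztd_model + bias + rng.normal(0.0, 1.0, n)

X = np.column_stack([ztd_model, lat, lon, np.sin(phase), np.cos(phase)])
y = ztd_gnss - ztd_model          # learn the correction to the empirical model

net = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                                 random_state=0))
net.fit(X[:1500], y[:1500])

resid = net.predict(X[1500:]) - y[1500:]
rmse = np.sqrt(np.mean(resid ** 2))
print(f"hold-out RMSE of the corrected ZTD: {rmse:.2f} mm")
```

Learning the correction rather than the full delay keeps the target small and well scaled, a common design choice when refining an existing empirical model.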

The proposed approach holds promise for enhancing the precision of GNSS positioning, InSAR earth observation, and generating more dependable water vapor products.

How to cite: Wang, D., Wang, L., and Kutterer, H.: Machine learning for atmospheric delay correction in geodesy, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-9543, https://doi.org/10.5194/egusphere-egu24-9543, 2024.

X2.16
|
EGU24-12556
|
ECS
Shuyin Mao, Junyang Gou, and Benedikt Soja

High-precision ionospheric prediction is essential for real-time applications of the Global Navigation Satellite System (GNSS), especially for single-frequency receivers. Various machine learning (ML) algorithms have been utilized for ionospheric forecasting and shown great potential. However, previous studies have primarily relied on IGS global ionospheric maps (GIMs) as training data to develop models for global vertical total electron content (VTEC) forecasting. The forecasting accuracy is thereby limited by the input IGS GIMs due to their low spatio-temporal resolution.

Our previous work proposed a neural-network-based (NN-based) global ionospheric model. GIMs generated with this approach showed enhanced accuracy compared with conventional IGS GIMs, as VTEC irregularities can be finely resolved. In this study, we benefit from these ML-based GIMs by employing transfer learning to improve the quality of GIM forecasts. An ML-based model for 1-day-ahead global VTEC forecasting is first trained on a series of IGS GIMs from 2004 to 2022. It is then fine-tuned using the recent NN-based GIMs from 2020 to 2022. In this way, the model gains good generalizability from the large dataset of IGS GIMs while achieving accuracy comparable to the NN-based GIMs. Different machine learning approaches, including a convolutional long short-term memory (ConvLSTM) network and a transformer, are implemented and compared. To validate their performance, we perform hindcast studies comparing the 1-day-ahead forecasts of our model with satellite altimetry VTEC, and conduct single-frequency precise point positioning tests based on the forecast maps.
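The pre-train-then-fine-tune idea can be sketched with a simple feed-forward regressor standing in for the ConvLSTM/transformer, using scikit-learn's `warm_start` to continue training on the smaller, finer-structured dataset; the toy VTEC field and all parameters are invented:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)

def vtec(X, sharpness):
    """Toy VTEC field; larger `sharpness` mimics finer-resolution maps."""
    lat, lon, t = X.T
    return (20.0 + 10.0 * np.cos(lat) * np.cos(lon - t)
            + sharpness * np.sin(3.0 * lon) * np.cos(lat))

# Stage-1 data: large but smooth (stand-in for the long IGS GIM archive)
X_big = rng.uniform(-np.pi, np.pi, (5000, 3))
# Stage-2 data: small but with finer structure (stand-in for NN-based GIMs)
X_small = rng.uniform(-np.pi, np.pi, (800, 3))

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500,
                   warm_start=True, random_state=0)
net.fit(X_big, vtec(X_big, sharpness=0.0))       # pre-train on the big set
net.set_params(learning_rate_init=1e-4, max_iter=200)
net.fit(X_small, vtec(X_small, sharpness=3.0))   # fine-tune with a small step size

X_test = rng.uniform(-np.pi, np.pi, (500, 3))
err = net.predict(X_test) - vtec(X_test, sharpness=3.0)
rmse = np.sqrt(np.mean(err ** 2))
print(f"test RMSE after fine-tuning: {rmse:.2f} TECU")
```

Lowering the learning rate for the second stage is a typical transfer-learning precaution so the fine-tuning does not erase what was learned from the large dataset.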

How to cite: Mao, S., Gou, J., and Soja, B.: An Ionospheric Forecasting Model Based on Transfer Learning Using High-Resolution Global Ionospheric Maps, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-12556, https://doi.org/10.5194/egusphere-egu24-12556, 2024.

X2.17
|
EGU24-5743
Sijie Lyu, Yan Xiang, Wenxian Yu, and Benedikt Soja

The precise point positioning-real-time kinematic (PPP-RTK) method achieves fast convergence in Global Navigation Satellite System (GNSS) positioning and navigation. Correcting slant ionospheric delays is crucial for this purpose. Conventionally, slant ionospheric corrections at the user end are obtained by generating an ionospheric map using a first-order polynomial function, or by interpolating with methods such as inverse distance weighting (IDW) and Kriging. However, with these approaches it is challenging to obtain precise and stable ionospheric corrections, especially during ionospheric disturbances, potentially degrading the positioning solution even with augmentation. Fortunately, machine learning has the capability to capture complex, non-linear characteristics of diverse data, offering a potential solution to this issue.

In this study, we aim to improve the accuracy of slant ionospheric delay models using machine learning and evaluate them in PPP-RTK. Initially, we extract highly precise slant ionospheric delays from carrier-phase measurements after ambiguity resolution for two regional GNSS networks in Switzerland and the South of China. Then, we employ the Gaussian Process Regressor to interpolate epoch-specific and satellite-specific slant ionospheric delays, utilizing latitude and longitude as features. Two different approaches are tested: the direct interpolation from reference stations and the indirect interpolation from a gridded map. Our results indicate that the accuracy of interpolated ionospheric delays using machine learning is higher than with conventional methods, including IDW and Kriging. Finally, we evaluate PPP-RTK positioning results with ionospheric corrections from the different interpolation methods, revealing that the machine learning method exhibits superiority in both positioning accuracy and convergence time over conventional methods.
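A minimal example of the interpolation step, Gaussian process regression on latitude/longitude features with invented reference-station delays (a toy smooth field, not the authors' data), might read:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)

# Hypothetical reference-station slant ionospheric delays for one epoch
# and one satellite [m], varying smoothly with latitude and longitude.
lat = rng.uniform(46.0, 48.0, 30)
lon = rng.uniform(6.0, 10.0, 30)
delay = 3.0 + 0.5 * np.sin(lat) + 0.3 * np.cos(lon) + rng.normal(0, 0.01, 30)

X = np.column_stack([lat, lon])
gpr = GaussianProcessRegressor(
    kernel=RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-4),
    normalize_y=True).fit(X, delay)

# Interpolate at a user location; the GP also yields a formal uncertainty
user = np.array([[47.0, 8.0]])
mu, sigma = gpr.predict(user, return_std=True)
print(f"interpolated delay: {mu[0]:.3f} +/- {sigma[0]:.3f} m")
```

The predictive standard deviation is one practical advantage of the GP approach over plain IDW: it indicates where the network geometry supports the interpolation poorly.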

How to cite: Lyu, S., Xiang, Y., Yu, W., and Soja, B.: Machine learning-based regional slant ionospheric delay model and its application for PPP-RTK, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-5743, https://doi.org/10.5194/egusphere-egu24-5743, 2024.

X2.19
|
EGU24-770
|
ECS
|
Shubhi Kant and Balaji Devaraju

Coastal zones exhibit unique altimetry signal characteristics, primarily influenced by the presence of land artifacts. The shape of the altimetry echo serves as a distinctive marker, representing the physical parameters of the surface it originates from. Open ocean reflections for SAR (Synthetic Aperture Radar) mode yield signals with a steep leading edge and a trailing edge modeled by a negative exponential function. In contrast, land areas in coastal zones typically produce specular and quasi-specular waveforms. The presence of specific waveform classes is further influenced by seasonality and changes in land use and patterns such as coastal erosion.

This study aims to classify altimetry waveforms in coastal zones at various global sites and subsequently retrack the identified waveform classes using an optimal retracking strategy. Site selection is based on the availability of in-situ tide gauge data. Waveform classification is achieved using a long short-term memory (LSTM) autoencoder, which captures the temporal nature of the waveforms and provides an 8-dimensional feature representation. In addition, the LSTM autoencoder provides de-noised waveforms, which are used in the subsequent retracking processes.

Different waveform shapes necessitate specific retracking strategies. While an Ocean retracker suffices for SAR waveforms over open oceans, it is inadequate for retracking specular, quasi-specular, and multi-peak waveforms. Advanced retracking algorithms such as OCOG, Threshold, ALES, Beta-5, and Beta-9 are employed based on the waveform class.
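As an illustration of one of the simpler retrackers named above, a threshold retracker referenced to the OCOG amplitude can be sketched as follows (toy waveform and hypothetical parameter values, not the study's implementation):

```python
import numpy as np

def ocog_amplitude(waveform):
    """OCOG amplitude: power-weighted amplitude of the waveform."""
    p = np.asarray(waveform, dtype=float)
    return np.sqrt((p ** 4).sum() / (p ** 2).sum())

def threshold_retrack(waveform, level=0.5):
    """Retracked gate: first crossing of level * OCOG amplitude,
    refined by linear interpolation between the bracketing gates."""
    p = np.asarray(waveform, dtype=float)
    thr = level * ocog_amplitude(p)
    i = int(np.argmax(p >= thr))               # first gate above threshold
    if i == 0:
        return 0.0
    return (i - 1) + (thr - p[i - 1]) / (p[i] - p[i - 1])

# Toy quasi-specular waveform: sharp peak near gate 34
gates = np.arange(128)
wf = np.exp(-0.5 * ((gates - 34) / 3.0) ** 2)
print(f"retracked gate: {threshold_retrack(wf, level=0.5):.2f}")
```

The retracked gate shifts the nominal tracking point, which translates into a range correction; the more elaborate retrackers mentioned in the abstract (ALES, Beta-5, Beta-9) fit full waveform models instead.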

To validate the proposed strategy, the performance of the altimetry product, sea level anomalies, and retracking outcomes are compared with established coastal altimetry products like XTRACK, in-situ tide gauge data, and popular retracking algorithms like OCOG, Ocean retracker, Threshold, Beta-5 and Beta-9. Sea level anomalies are derived from sensor geophysical data records (SGDR) of altimetry missions and compared with existing coastal altimetry products and in-situ tide gauge records. Evaluation metrics such as Pearson's correlation coefficient and root mean square error assess the agreement in seasonal and yearly trends, as well as the accuracy of measurements.

This comprehensive analysis aims to validate the effectiveness of the proposed coastal waveform post-processing strategy, showcasing its ability to quantify long-term sea level trends and explore regional variations.

How to cite: Kant, S. and Devaraju, B.: Altimetry Waveform Classification and Retracking Strategy for Improved Coastal Altimetry Products, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-770, https://doi.org/10.5194/egusphere-egu24-770, 2024.

X2.20
|
EGU24-2619
Leonid Petrov and Nlingi Hanaba

Evaluation of the uncertainties of geodetic parameter estimates is a problem that has not yet been solved in a satisfactory way. A direct evaluation of the uncertainties derived from a least-squares solution is labeled "formal" and is usually biased, sometimes by up to an order of magnitude. Customarily, the use of formal errors for scientific analysis is discouraged. We claim that the root of the problem is neglecting the off-diagonal elements of the variance-covariance matrix of the noise in the data. A careful reconstruction of the full variance-covariance matrix, including the off-diagonal terms, greatly improves the realism of uncertainty estimates derived from least squares. We processed a dataset of VLBI group delays and built the a priori variance-covariance matrix of the atmosphere-driven noise based on an analysis of the output of NASA high-resolution numerical weather models. We found that the uncertainties of parameter estimates derived from a least-squares solution that uses such variance-covariance matrices become much closer to realistic errors. We consider approaches for the implementation of this method in the routine analysis of space geodesy data.
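The effect described above can be reproduced in a small synthetic experiment (not the authors' VLBI setup): with temporally correlated noise, white-noise formal errors are far too optimistic, while formal errors propagated through the full variance-covariance matrix match the empirical scatter:

```python
import numpy as np

rng = np.random.default_rng(5)
n, trials = 200, 500

# Linear model: offset + rate (e.g. a station position time series)
t = np.linspace(0.0, 1.0, n)
A = np.column_stack([np.ones(n), t])

# Temporally correlated, atmosphere-like noise (exponential correlation)
C = 0.5 ** (np.abs(np.subtract.outer(t, t)) * 50.0)
Lc = np.linalg.cholesky(C)

# Empirical scatter of least-squares estimates over many noise realizations
est = np.array([np.linalg.lstsq(A, A @ [1.0, 2.0] + Lc @ rng.standard_normal(n),
                                rcond=None)[0] for _ in range(trials)])
emp_std = est.std(axis=0)

AtA_inv = np.linalg.inv(A.T @ A)
formal_white = np.sqrt(np.diag(AtA_inv))                         # off-diagonals neglected
formal_full = np.sqrt(np.diag(AtA_inv @ A.T @ C @ A @ AtA_inv))  # full covariance

print("empirical scatter:", emp_std.round(3))
print("formal (white)   :", formal_white.round(3))
print("formal (full C)  :", formal_full.round(3))
```

Here the white-noise formal errors understate the true scatter severalfold, which is exactly the kind of bias the abstract attributes to neglected off-diagonal covariance terms.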

How to cite: Petrov, L. and Hanaba, N.: From formal errors towards realistic uncertainties, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-2619, https://doi.org/10.5194/egusphere-egu24-2619, 2024.

X2.21
|
EGU24-16940
Eungyu Park

In geological characterization, traditional methods that rely on the covariance matrix for continuous variable estimation often either neglect or oversimplify the challenge posed by subsurface non-stationarity. This study presents an innovative methodology that uses ancillary data, such as geological insights and geophysical exploration, to address this challenge directly, with the goal of accurately delineating the spatial distribution of subsurface petrophysical properties, especially in large geological fields where non-stationarity is prevalent. The methodology is based on the geodesic distance on an embedded manifold and is complemented by level-set curves as a key tool for relating observed geological structures to intrinsic geological non-stationarity. During validation, the parameters ρ and β were revealed to be critical, influencing the strength of and the dependence on secondary data of the estimated spatial variables, respectively. Comparative evaluations showed that our approach performed better than a traditional method (i.e., kriging), particularly in accurately representing complex and realistic subsurface structures. The proposed method offers improved accuracy, which is essential for high-stakes applications such as contaminant remediation and underground repository design. This study focused primarily on two-dimensional models; three-dimensional advancements and evaluations across diverse geological structures are still needed. Overall, this research presents novel strategies for estimating non-stationary geologic media, setting the stage for improved subsurface characterization in the future.
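The core geodesic-distance idea can be illustrated with a k-nearest-neighbour graph and shortest paths (an Isomap-style sketch on an invented curved structure, not the authors' level-set formulation):

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.spatial.distance import cdist

rng = np.random.default_rng(6)

# Points sampled along a curved "geological structure" (a half circle):
# Euclidean distance cuts across the bend, whereas the geodesic distance
# follows the structure itself.
theta = np.sort(rng.uniform(0.0, np.pi, 60))
pts = np.column_stack([np.cos(theta), np.sin(theta)])

# k-nearest-neighbour graph; in a dense csgraph, zero entries mean "no edge"
D = cdist(pts, pts)
k = 5
knn = np.zeros_like(D)
for i in range(len(pts)):
    nbrs = np.argsort(D[i])[1:k + 1]        # skip the point itself
    knn[i, nbrs] = D[i, nbrs]
geo = shortest_path(knn, method="D", directed=False)

# Endpoints of the half circle: straight-line distance ~2, arc length ~pi
print(f"Euclidean: {D[0, -1]:.2f}, geodesic: {geo[0, -1]:.2f}")
```

Replacing Euclidean with geodesic distances in a covariance function is one way such manifold-based methods make spatial correlation respect curved geological structures rather than cut across them.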

How to cite: Park, E.: Manifold Embedding Based on Geodesic Distance for Non-stationary Subsurface Characterization Using Secondary Information, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-16940, https://doi.org/10.5194/egusphere-egu24-16940, 2024.

X2.22
|
EGU24-10154
Jessica Hawthorne

Borehole strainmeters are remarkably precise instruments.  They are often installed to record deformation produced by earthquakes, postseismic slip, and slow earthquakes.  Strainmeters can record such tectonic deformation on timescales of minutes to months with a precision of 0.1 to 1 nanostrain; they record sub-Angstrom changes in borehole width.  

However, the instruments’ high precision also extends to non-tectonic signals.  The borehole width often changes by more than 1 Angstrom when it rains, when atmospheric pressure increases, and when snow loads the ground.  Thus, if we want to take full advantage of the instruments and investigate tectonic deformation with high precision, we need to understand and remove the deformation produced by non-tectonic signals such as water loading.

So in this study, I present several neural-network-based models of hydrologic deformation.  Neural networks are ideal for this modelling as they can accommodate the nonlinearity of the system; 1 cm of rain will cause different deformation if it falls on saturated winter soil than if it falls on dry summer soil.  Further, neural networks can take advantage of the abundance of local weather data, including at short timescales.  In my initial modelling, I attempt to reproduce and predict strain as a function of current and past precipitation, atmospheric pressure, wind speed, and temperature.  For simplicity and ease of use, all of these parameters are taken from the ECMWF reanalysis models.

I design two neural networks to model the observed strain, using physical intuition to limit the number of free parameters and thus improve the training.  The first network is simple; it creates 10 linear combinations of past rainfall, with exclusively positive weights, and then combines those linear combinations to predict the strain.  The second network also creates 10 linear combinations of past rainfall with positive weights.  But it multiplies those linear combinations of rain by nonlinear functions that could represent the state of the Earth and aquifers.  These nonlinear functions include dependencies on past rainfall, atmospheric pressure, wind speed, and temperature.
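A stripped-down, single-combination analogue of the first network, a positive-weight combination of past rainfall fitted here by non-negative least squares on entirely synthetic data, could look like:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(7)
n_days, n_lags = 800, 30

# Synthetic daily rainfall and a "true" strain response decaying over
# ~10 days after each rain event (hypothetical kernel, arbitrary units)
rain = rng.gamma(0.3, 5.0, n_days)
kernel = np.exp(-np.arange(n_lags) / 10.0)
strain = np.convolve(rain, kernel)[:n_days] + 0.5 * rng.standard_normal(n_days)

# Design matrix of past rainfall: column j holds rain lagged by j days
X = np.column_stack([np.r_[np.zeros(j), rain[:n_days - j]]
                     for j in range(n_lags)])

# Non-negative least squares: one positive-weight combination of past rain
weights, rnorm = nnls(X, strain)

r2 = 1.0 - rnorm ** 2 / np.sum((strain - strain.mean()) ** 2)
print(f"variance explained: {r2:.2f}")
```

The positivity constraint plays the same role as the exclusively positive weights in the networks described above: rainfall should not be allowed to produce deformation of arbitrary sign just to fit noise.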

These networks train quickly, within a few minutes, and they do a reasonable job of producing the first-order features of the strain.  Both models accommodate more than 50% of the hydrologic signal on timescales of days.  Such modelling may or may not be interesting to hydrologists, but for those interested in tectonic deformation, reproducing and removing 50% of the hydrologic signal means removing 50% of the noise.

It is likely that a better-developed and regularised model could remove much more of the noise, and we are continuing to add constraints, initial weights, and training schemes to improve the hydrologic modelling.

How to cite: Hawthorne, J.: Neural network-based hydrology corrections for borehole strainmeters, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-10154, https://doi.org/10.5194/egusphere-egu24-10154, 2024.