NP4.1 | EDI
Analysis of complex geoscientific time series: linear, nonlinear, and computer science perspectives
Co-organized by BG2/CL5.3/EMRP2/ESSI1/HS13/SM3/ST2
Convener: Reik Donner | Co-conveners: Tommaso Alberti, Giorgia Di Capua

Presentations: Wed, 25 May, 08:30–11:05 (CEST) | Room 0.94/95

Chairpersons: Tommaso Alberti, Giorgia Di Capua
08:30–08:35 | EGU22-91 | ECS | Virtual presentation
Ruby Saha

A complex network provides a robust framework for statistically investigating the topology of local and long-range connections, i.e., teleconnections, in climate dynamics. The climate network is constructed from a meteorological data set using the linear Pearson correlation coefficient to measure the similarity between two regions. Long-range teleconnections connect remote geographical sites and are crucial for climate networks. In this study, we show that during the onset of El Niño–Southern Oscillation episodes, the teleconnection pattern changes according to the episode's strength. The long-range teleconnections are significant and are responsible for the episode's extremum Oceanic Niño Index (ONI), which is attained gradually after onset. We quantify betweenness centrality and find that the teleconnection distribution pattern and the betweenness measurements agree well.
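The pipeline described in the abstract (correlate all pairs of grid-point series, threshold the correlations into links, rank nodes by betweenness) can be sketched as follows. This is a toy illustration with synthetic data and an arbitrary threshold; the actual data set and significance test are not reproduced here, and the betweenness routine is the standard Brandes algorithm rather than the author's code.

```python
import numpy as np
from collections import deque

def betweenness(adjlist):
    # Brandes' algorithm for betweenness centrality on an unweighted graph.
    n = len(adjlist)
    bc = [0.0] * n
    for s in range(n):
        stack, preds = [], [[] for _ in range(n)]
        sigma = [0] * n; sigma[s] = 1
        dist = [-1] * n; dist[s] = 0
        queue = deque([s])
        while queue:                       # BFS from source s
            v = queue.popleft(); stack.append(v)
            for w in adjlist[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1; queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; preds[w].append(v)
        delta = [0.0] * n
        while stack:                       # accumulate dependencies
            w = stack.pop()
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

rng = np.random.default_rng(0)
data = rng.standard_normal((30, 500))      # toy grid-point anomaly series

corr = np.corrcoef(data)                   # pairwise Pearson correlations
np.fill_diagonal(corr, 0.0)
adj = np.abs(corr) > 0.1                   # illustrative fixed threshold

adjlist = [np.flatnonzero(adj[i]).tolist() for i in range(30)]
bc = betweenness(adjlist)                  # candidate teleconnection hubs
```

In a real climate network the threshold would come from a significance test against a suitable null model, not a fixed value.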

How to cite: Saha, R.: The role of teleconnections in complex climate network, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-91, https://doi.org/10.5194/egusphere-egu22-91, 2022.

08:35–08:40 | EGU22-7256 | ECS | On-site presentation
Jakob Schlör, Felix M. Strnad, Christian Fröhlich, and Bedartha Goswami

Representing spatio-temporal climate variables as complex networks allows uncovering nontrivial structure in the data. Although various tools for detecting communities in climate networks have been used to group nodes (spatial locations) with similar climatic conditions, we are often interested in identifying important links between communities. Of particular interest are methods to detect teleconnections, i.e., links over large spatial distances mediated by atmospheric processes.

We propose to use a recently developed network measure based on Ricci curvature to visualize teleconnections in climate networks. Ricci curvature distinguishes between-community and within-community links in networks. Applied to networks constructed from surface temperature anomalies, we show that Ricci curvature separates spatial scales. We use Ricci curvature to study differences in the global teleconnection patterns of different types of El Niño events, namely the Eastern Pacific (EP) and Central Pacific (CP) types. Our method reveals a global picture of teleconnection patterns, showing confinement of teleconnections to the tropics under EP conditions but teleconnections to the tropics and the Northern and Southern Hemispheres under CP conditions. The obtained teleconnections corroborate previously reported impacts of EP and CP events.
Our results suggest that Ricci curvature is a promising visual analytics tool for studying the topology of climate systems, with potential applications across observational and model data.
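Curvature-type measures separate bridge links from intra-community links because a bridge has little neighbourhood support. The same intuition can be shown with the much simpler combinatorial Forman curvature (used here purely for illustration; the abstract's measure is a Ricci curvature, not necessarily this one), which for an unweighted edge (u, v) is 4 - deg(u) - deg(v):

```python
import numpy as np

def forman_curvature(adj, u, v):
    # Combinatorial Forman curvature of an unweighted edge (u, v):
    # F = 4 - deg(u) - deg(v). Strongly negative values flag bridge-like
    # edges between communities; higher values flag intra-community edges.
    deg = adj.sum(axis=1)
    return 4 - int(deg[u]) - int(deg[v])

# Two 5-node cliques joined by a single long-range "teleconnection" edge.
n = 10
adj = np.zeros((n, n), dtype=int)
for block in (range(0, 5), range(5, 10)):
    for i in block:
        for j in block:
            if i != j:
                adj[i, j] = 1
adj[0, 5] = adj[5, 0] = 1                  # the bridge edge

within = forman_curvature(adj, 1, 2)       # inside a clique: 4 - 4 - 4 = -4
between = forman_curvature(adj, 0, 5)      # the bridge: 4 - 5 - 5 = -6
```

The bridge edge receives a lower (more negative) curvature than the within-community edges, which is the property exploited to highlight between-community teleconnections.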

How to cite: Schlör, J., Strnad, F. M., Fröhlich, C., and Goswami, B.: Identifying patterns of teleconnections, a curvature-based network analysis, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7256, https://doi.org/10.5194/egusphere-egu22-7256, 2022.

08:40–08:45 | EGU22-11667 | ECS | On-site presentation
Moritz Haas, Bedartha Goswami, and Ulrike von Luxburg

Network-based analyses of dynamical systems have become increasingly popular in climate science. Instead of focussing on the chaotic-systems aspect, we come from a statistical perspective and highlight the often-ignored fact that the calculated correlation values are only empirical estimates. We find that the uncertainty stemming from the estimation procedure alone has a major impact on network characteristics. Using isotropic random fields on the sphere, we observe spurious behaviour in networks commonly constructed from finite samples. When the data has a locally coherent correlation structure, even spurious link-bundle teleconnections have to be expected. We reevaluate the outcome and robustness of existing studies based on their design choices and null hypotheses.
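The core point, that thresholded sample correlations of truly independent series still produce links, is easy to demonstrate (a toy sketch, not the authors' spherical random fields or null models):

```python
import numpy as np

rng = np.random.default_rng(1)
n_nodes, n_time = 50, 100                  # short series, as in finite-sample studies
data = rng.standard_normal((n_nodes, n_time))  # truly independent series

corr = np.corrcoef(data)
np.fill_diagonal(corr, 0.0)

# The true correlation is zero everywhere, yet a fixed threshold still
# admits spurious links from estimation noise alone (sample correlations
# of independent series scatter with std of roughly 1/sqrt(n_time)).
threshold = 0.2
spurious_density = np.mean(np.abs(corr) > threshold)
```

With only 100 time steps, a non-negligible fraction of node pairs exceeds the threshold purely by chance, so any fixed-threshold network built from such estimates contains spurious links.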

How to cite: Haas, M., Goswami, B., and von Luxburg, U.: Spurious Behaviour in Networks from Spatio-temporal Data, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11667, https://doi.org/10.5194/egusphere-egu22-11667, 2022.

08:45–08:50 | EGU22-11118 | ECS | On-site presentation
Merle Kammer, Felix Strnad, and Bedartha Goswami

Climate networks have helped to uncover complex structures in climatic observables from large time series data sets. For instance, climate networks were used to reduce rainfall data to relevant patterns that can be linked to geophysical processes. However, the identification of regions that show similar behavior with respect to the timing and spatial distribution of extreme rainfall events (EREs) remains challenging. 
To address this, we apply a recently developed algorithmic framework based on tangles [1] to discover community structures in the spatial distribution of EREs and to obtain inherently interpretable communities as an output. First, we construct a climate network using time-delayed event synchronization and create a collection of cuts (bipartitions) from the ERE data. Using these cuts, the tangles framework allows us to both exploit the climate network structure and incorporate prior knowledge from the data. Applying tangles enables us to create a hierarchical tree representation of communities, including the likelihood that spatial locations belong to a community. Each tree layer can be associated with an underlying cut, thus making the division into different communities transparent.
Applied to global precipitation data, we show that tangles is a promising tool to quantify community structures and to reveal underlying geophysical processes leading to these structures.
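For context, event synchronization counts near-coincident events between two sites; a minimal undelayed variant might look like the following (the authors use a time-delayed formulation with adaptive coincidence windows, not shown here):

```python
import numpy as np

def event_sync(e1, e2, tau_max=3):
    # Fraction of events in e1 that have a partner event in e2 within
    # tau_max time steps; a simplified, fixed-window stand-in for the
    # (time-delayed) event synchronization used to build ERE networks.
    count = 0
    for t1 in e1:
        if np.any(np.abs(e2 - t1) <= tau_max):
            count += 1
    return count / max(len(e1), 1)

a = np.array([5, 20, 40])    # event time indices at site A
b = np.array([6, 22, 80])    # event time indices at site B
sync = event_sync(a, b)      # 2 of 3 events in a have a close partner in b
```

Pairwise scores like this, computed between all grid points, yield the similarity matrix from which the network and the cuts are derived.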

 

[1] S. Klepper, C. Elbracht, D. Fioravanti,  J. Kneip, L. Rendsburg, M. Teegen, and U. von Luxburg. Clustering with Tangles: Algorithmic Framework and Theoretical Guarantees. CoRR, abs/2006.14444v2, 2021. URL https://arxiv.org/abs/2006.14444v2.

How to cite: Kammer, M., Strnad, F., and Goswami, B.: Explainable community detection of extreme rainfall events using the tangles algorithmic framework, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11118, https://doi.org/10.5194/egusphere-egu22-11118, 2022.

08:50–08:55 | EGU22-6014 | ECS | Virtual presentation
Hai Zhou, Daniel Schertzer, and Ioulia Tchiguirinskaia

Rainfall time series prediction is crucial for geoscientific system monitoring, but it is challenging due to the extreme variability of rainfall. To improve prediction accuracy, we propose a hybrid deep learning model (VMD-RNN). In this study, variational mode decomposition (VMD) is first applied to decompose the original rainfall time series into several sub-sequences according to the frequency domain. Different recurrent neural network (RNN) models are then used to predict the individual sub-sequences, and the final prediction is reconstructed by summing the sub-sequence predictions. The RNN models considered are long short-term memory (LSTM), gated recurrent unit (GRU), bidirectional LSTM (BiLSTM), and bidirectional GRU (BiGRU), all well suited to sequence prediction. The root mean square error (RMSE) of the predictions is then used to select the best RNN model for each sub-sequence. In addition to RMSE, the framework of universal multifractals (UM) is introduced to evaluate prediction performance, which makes it possible to characterize the extreme variability of the predicted rainfall time series. The study employs two rainfall datasets from 2001 to 2020 in Paris, with daily and hourly resolutions. The results show that, compared to directly predicting the original time series, the proposed hybrid VMD-RNN model improves the prediction of high or extreme values for the daily dataset, but does not significantly enhance the prediction of zero or low values. The VMD-RNN model also outperforms existing deep learning models without decomposition on the hourly dataset when evaluated with RMSE, while universal multifractal analyses point out limitations.
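The decompose-predict-reconstruct pattern can be sketched as follows, with deliberately crude stand-ins: an FFT band split in place of VMD and a one-step persistence forecast in place of the trained RNNs (both are assumptions for illustration only):

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(512)
rain = np.maximum(0, np.sin(2 * np.pi * t / 64) + 0.5 * rng.standard_normal(512))

# Stand-in for VMD: split the series into a low- and a high-frequency band
# with an FFT mask (real VMD solves a variational problem instead).
spec = np.fft.rfft(rain)
cut = 20
low = np.fft.irfft(np.where(np.arange(spec.size) < cut, spec, 0), n=rain.size)
high = rain - low

# Stand-in for the per-band RNNs: a one-step persistence forecast per band.
pred_low, pred_high = low[-1], high[-1]

# Reconstruction: the final forecast is the sum of the band forecasts.
forecast = pred_low + pred_high
```

The point of the hybrid scheme is that each band is easier to model than the raw series; here the band predictors are trivial, but the sum-of-bands reconstruction step is the same.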

How to cite: Zhou, H., Schertzer, D., and Tchiguirinskaia, I.: Combining variational mode decomposition and recurrent neural network to predict rainfall time series and evaluating prediction performance by universal multifractals, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6014, https://doi.org/10.5194/egusphere-egu22-6014, 2022.

08:55–09:00 | EGU22-13053 | ECS | Presentation form not yet defined
Nachiketa Chakraborty and Javier Amezcua

The study of cause-and-effect relationships – causality – is central to identifying the mechanisms behind the phenomena we observe, and in nonlinear dynamical systems we wish to understand these mechanisms as they unfold over time. In physical sciences such as the geosciences and astrophysics, numerous competing causes drive the system in complicated ways that are hard to disentangle. It is therefore important to demonstrate how causal attribution works on relatively simple systems for which we have physical intuition. Furthermore, in the earth and atmospheric sciences and meteorology, we have a plethora of observations that are used both to understand the science underlying the phenomena and to forecast. Optimally combining models (theoretical or numerical) with observations through data assimilation is, however, a challenging, computationally intensive task, so understanding the impact of observations and the required cadence is very useful. Here, we present experiments in causal inference and attribution with the Lorenz 63 system – a long-studied system. We first test the causal relations between the variables characterising the model, and then simulate observations using perturbed versions of the model to test the impact of the observation cadence for each combination of the three variables.
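The Lorenz 63 system used in these experiments is easy to reproduce; a standard fixed-step RK4 integration with the classical parameter values (sigma = 10, rho = 28, beta = 8/3; step size chosen arbitrarily) might look like:

```python
import numpy as np

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # Classic Lorenz (1963) vector field.
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def integrate(state, dt=0.01, n=2000):
    # Fixed-step fourth-order Runge-Kutta integration of the L63 system.
    traj = np.empty((n, 3))
    for i in range(n):
        k1 = lorenz63(state)
        k2 = lorenz63(state + 0.5 * dt * k1)
        k3 = lorenz63(state + 0.5 * dt * k2)
        k4 = lorenz63(state + dt * k3)
        state = state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        traj[i] = state
    return traj

traj = integrate(np.array([1.0, 1.0, 1.0]))   # trajectory on the attractor
```

Synthetic "observations" for assimilation experiments are then typically obtained by subsampling this trajectory at a chosen cadence and adding observation noise.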

How to cite: Chakraborty, N. and Amezcua, J.: Causal Diagnostics for Observations - Experiments with the L63 system, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13053, https://doi.org/10.5194/egusphere-egu22-13053, 2022.

09:00–09:05 | EGU22-4795 | ECS | On-site presentation
Aditi Kathpalia, Pouya Manshour, and Milan Paluš

Predicting climate and determining its major drivers has become even more important as climate change poses a great challenge to humankind and our planet. Different studies employ correlation methods, causality methods, or modelling approaches to study the interaction between climate and climate-forcing variables (anthropogenic or natural). This includes the study of the interaction between global surface temperatures and CO2, and between rainfall in different locations and the El Niño–Southern Oscillation (ENSO) phenomenon. The results produced by different studies have been found to differ and remain debatable, presenting an ambiguous situation. In this work, we develop and apply a novel, robust causality estimation technique for time-series data (to estimate causal influence between given observables) that can help to resolve the ambiguity. The discrepancy between existing results arises from challenges with the acquired data and limitations of the causal inference and modelling approaches. Our approach combines a recently proposed causality method, Compression-Complexity Causality (CCC) [1], with ordinal/permutation-pattern-based coding [2]. CCC estimates have been shown to be robust for bivariate systems with low temporal resolution, missing samples, long-term memory, and finite-length data [1]. The use of ordinal patterns helps to extend bivariate CCC to the multivariate case by capturing the multidimensional dynamics of the given variables' systems in the symbolic temporal sequence of a single variable. The methodology is tested on dynamical-systems data that are short in length and have been corrupted with missing samples or subsampled to different levels. The superior performance of 'Permutation CCC' on such data relative to other causality estimation methods strengthens our trust in the method.
We apply the method to study the interaction between CO2 and temperature recordings on three different time scales, CH4 and temperature on the paleoclimate scale, ENSO and the South Asian monsoon on monthly and yearly time scales, and the North Atlantic Oscillation and surface temperature on daily and monthly time scales. These datasets are either short in length, sampled irregularly, contain missing samples, or exhibit a combination of these factors. Our results validate some existing studies while contradicting others. In addition, the novel permutation-CCC approach opens the possibility of making useful inferences on other challenging climate datasets.
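The ordinal/permutation coding step referenced above replaces each short window of a series by the permutation that sorts it (Bandt and Pompe [2]); a minimal sketch, with window length m = 3 chosen arbitrarily:

```python
from itertools import permutations
import numpy as np

def ordinal_pattern_sequence(x, m=3):
    # Encode each length-m window of x by the index of the permutation
    # that sorts it (Bandt-Pompe ordinal patterns), yielding a symbolic
    # sequence that preserves the series' rank dynamics.
    patterns = {p: i for i, p in enumerate(permutations(range(m)))}
    return np.array([patterns[tuple(np.argsort(x[i:i + m]))]
                     for i in range(len(x) - m + 1)])

symbols = ordinal_pattern_sequence(np.array([4.0, 7.0, 9.0, 10.0, 6.0, 11.0, 3.0]))
```

The resulting symbol sequence is what a compression-complexity measure such as CCC can then operate on; rank-based coding makes the representation robust to monotone distortions of the raw amplitudes.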


This study is supported by the Czech Science Foundation, Project No. GA19-16066S, and by the Czech Academy of Sciences, Praemium Academiae awarded to M. Paluš.


References:
[1] Kathpalia, A., & Nagaraj, N. (2019). Data-based intervention approach for Complexity-Causality measure. PeerJ Computer Science, 5, e196.
[2] Bandt, C., & Pompe, B. (2002). Permutation entropy: a natural complexity measure for time series. Physical review letters, 88(17), 174102.

How to cite: Kathpalia, A., Manshour, P., and Paluš, M.: Robust Causal Inference for Irregularly Sampled Time Series: Applications in Climate and Paleoclimate Data Analysis, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4795, https://doi.org/10.5194/egusphere-egu22-4795, 2022.

09:05–09:10 | EGU22-9626 | ECS | On-site presentation
Tobias Braun, K. Hauke Kraemer, and Norbert Marwan

In the study of nonlinear observational time series, reconstructing the system's state space is the basis for many widely used analyses. From the perspective of dynamical systems theory, Takens' theorem states that under benign conditions the reconstructed state space preserves the most fundamental properties of the real, unknown system's attractor. Through many applications, time delay embedding (TDE) has established itself as the most popular approach for state space reconstruction [1]. However, standard TDE cannot account for multiscale properties of the system, and many of the more sophisticated approaches either require heuristic choices for a large number of parameters, fail when the signals are corrupted by noise, or obstruct analysis due to their very high complexity.

We present a novel semi-automated, recurrence-based method for the problem of attractor reconstruction. The proposed method is based on recurrence plots (RPs), a computationally simple yet effective 2D representation of a univariate time series. In a recent study, the quantification of RPs was extended by transferring the well-known box-counting algorithm to recurrence analysis [2]. We build on this formalism by introducing another box-counting measure originally put forward by B. Mandelbrot, namely succolarity [3]. Succolarity quantifies how well a fluid can permeate a binary texture [4]. We employ this measure by flooding an RP with a (fictional) fluid along its diagonals and computing succolarity as a measure of diagonal flow through the RP. Since a non-optimal choice of embedding parameters impedes the formation of diagonal lines in the RP and generally results in spurious patterns that block the fluid, the attractor reconstruction problem can be formulated as a maximization of diagonal recurrence flow.

The proposed state space reconstruction algorithm allows for non-uniform embedding delays to account for multiscale dynamics. It is conceptually and computationally simple and (nearly) parameter-free. Even in the presence of moderate to high noise intensity, reliable results are obtained. We compare the method's performance to existing techniques and showcase its effectiveness in applications to paradigmatic examples and nonlinear geoscientific time series.
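The recurrence plot underlying the method is computationally simple; a minimal sketch for a scalar series (the embedding and the succolarity-based diagonal flooding of the abstract are not reproduced here):

```python
import numpy as np

def recurrence_plot(x, eps):
    # Binary recurrence matrix R[i, j] = 1 when states i and j are closer
    # than eps; shown here for a scalar series, but embedded state vectors
    # work the same way with a vector norm.
    d = np.abs(x[:, None] - x[None, :])
    return (d < eps).astype(int)

x = np.sin(np.linspace(0, 6 * np.pi, 200))   # a simple periodic signal
R = recurrence_plot(x, eps=0.1)
```

For a well-embedded deterministic signal, recurrences organize into diagonal lines, which is exactly the structure the proposed diagonal-flow maximization rewards.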

 

References:

[1] Packard, N. H., Crutchfield, J. P., Farmer, J. D., & Shaw, R. S. (1980). Geometry from a time series. Physical Review Letters, 45(9), 712.
[2] Braun, T., Unni, V. R., Sujith, R. I., Kurths, J., & Marwan, N. (2021). Detection of dynamical regime transitions with lacunarity as a multiscale recurrence quantification measure. Nonlinear Dynamics, 1-19.
[3] Mandelbrot, B. B. (1982). The fractal geometry of nature (Vol. 1). New York: WH Freeman.
[4] de Melo, R. H., & Conci, A. (2013). How succolarity could be used as another fractal measure in image analysis. Telecommunication Systems, 52(3), 1643-1655.

How to cite: Braun, T., Kraemer, K. H., and Marwan, N.: A Recurrence Flow based Approach to Attractor Reconstruction, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9626, https://doi.org/10.5194/egusphere-egu22-9626, 2022.

09:10–09:15 | EGU22-2922 | ECS | On-site presentation
Theertha Kariyathan, Wouter Peters, Julia Marshall, Ana Bastos, and Markus Reichstein

The analysis of long, high-quality time series of atmospheric greenhouse gas measurements helps to quantify their seasonal to interannual variations and their impact on global climate. These discrete measurement records contain, however, gaps and at times noisy data influenced by local fluxes or synoptic-scale events; hence, appropriate filtering and curve-fitting techniques are often used to smooth and gap-fill the atmospheric time series. Previous studies have shown that there is an inherent uncertainty associated with curve fitting, which introduces biases depending on the choice of mathematical method used for data processing and can lead to scientific misinterpretation of the signal. Furthermore, the uncertainties in curve fitting can be propagated onto the metrics estimated from the fitted curve, which could significantly influence the quantification of these metrics and their interpretation. In this context, we present a novel methodology for constraining the uncertainty arising from fitting a smooth curve to the CO2 dry-air mole fraction time series, and for propagating this uncertainty onto commonly used metrics of the CO2 seasonal cycle. We generate an ensemble of fitted curves from the data using residual bootstrap sampling with loess-fitted residuals, which is representative of the inherent uncertainty in applying the curve-fitting method to the discrete data. The spread of the selected CO2 seasonal cycle metrics across the bootstrap time series provides an estimate of the inherent uncertainty in curve fitting to the discrete data. We further show that the approach can be extended to other curve-fitting methods by generating multiple bootstrap samples from residuals obtained with the CCGCRV filtering method, widely used in the atmospheric greenhouse gas measurement community.
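The residual-bootstrap idea can be sketched as follows, with a moving-average smoother standing in for the loess/CCGCRV fits (an assumption for illustration; the smoother, the seasonal-cycle metric, and the ensemble size here are all arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(365)
co2 = 400 + 3 * np.sin(2 * np.pi * t / 365) + 0.3 * rng.standard_normal(365)

def smooth(y, width=31):
    # Moving-average smoother as a stand-in for loess/CCGCRV curve fitting.
    kernel = np.ones(width) / width
    return np.convolve(y, kernel, mode="same")

fit = smooth(co2)
residuals = co2 - fit

# Residual bootstrap: resample residuals with replacement, add them back
# to the fit, refit, and recompute the metric on each pseudo-series.
n_boot = 200
amplitudes = np.empty(n_boot)
for b in range(n_boot):
    pseudo = fit + rng.choice(residuals, size=residuals.size, replace=True)
    refit = smooth(pseudo)
    amplitudes[b] = refit.max() - refit.min()   # a seasonal-cycle metric

uncertainty = amplitudes.std()                  # spread across the ensemble
```

The spread of the metric across the bootstrap ensemble is the curve-fitting uncertainty estimate; swapping in a different smoother tests how method-dependent that estimate is.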

How to cite: Kariyathan, T., Peters, W., Marshall, J., Bastos, A., and Reichstein, M.: Constraining the uncertainty in CO2 seasonal cycle metrics by residual bootstrapping., EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2922, https://doi.org/10.5194/egusphere-egu22-2922, 2022.

09:15–09:20 | EGU22-2097 | ECS | Highlight | Virtual presentation
Rixu Hao, Yuxin Zhao, Xiong Deng, Di Zhou, Dequan Yang, and Xin Jiang

Sea surface temperature (SST) is a vitally important variable of the global ocean that profoundly affects climate and marine ecosystems. Forecasting oceanic variables has traditionally relied on numerical models, which discretize the dynamical and physical oceanic equations, but numerical models suffer from limitations such as short timeliness, complex physical processes, and heavy computational cost. Machine learning, by contrast, has proven able to capture spatial and temporal information without these limitations, but previous research on multi-scale feature extraction and evolutionary forecasting under spatiotemporal integration is still inadequate. To fill this gap, a multi-scale spatiotemporal forecast model is developed combining ensemble empirical mode decomposition (EEMD) and spatiotemporal empirical orthogonal functions (STEOF) with long short-term memory (LSTM), referred to as EEMD-STEOF-LSTM. Specifically, EEMD is applied for adaptive multi-scale analysis; STEOF decomposes the spatiotemporal processes of different scales into a sum of products of spatiotemporal basis functions and corresponding coefficients, capturing the evolution of spatial and temporal processes simultaneously; and LSTM is employed for medium- to long-term forecasting of the STEOF-derived spatiotemporal coefficients. A case study of the daily average SST in the South China Sea shows that the proposed hybrid EEMD-STEOF-LSTM model consistently outperforms the optimal climatic normal (OCN), STEOF, and STEOF-LSTM, and can accurately forecast the characteristics of oceanic eddies. Statistical analysis of the case study demonstrates that the model has great potential for practical application in medium- to long-term forecasting of oceanic variables.
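The EOF step at the heart of STEOF is conventionally computed via an SVD of the (time x space) anomaly matrix; a minimal sketch on synthetic data (plain EOFs only; the space-time windowing of STEOF and the EEMD/LSTM stages are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(4)
# Toy SST anomaly field: 100 time steps x 50 spatial points.
field = rng.standard_normal((100, 50))
field -= field.mean(axis=0)            # remove the temporal mean per location

# EOF analysis via SVD: spatial patterns (EOFs) and their time-varying
# coefficients (PCs); STEOF extends the same idea to space-time windows so
# that the basis functions themselves encode temporal evolution.
U, s, Vt = np.linalg.svd(field, full_matrices=False)
pcs = U * s                            # principal-component time series
eofs = Vt                              # spatial patterns
reconstructed = pcs @ eofs             # exact reconstruction from all modes
```

In the hybrid model it is the (ST)EOF coefficients, a handful of time series rather than the full field, that the LSTM is trained to extend into the future.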

How to cite: Hao, R., Zhao, Y., Deng, X., Zhou, D., Yang, D., and Jiang, X.: Medium- to long-term forecast of sea surface temperature using EEMD-STEOF-LSTM hybrid model, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2097, https://doi.org/10.5194/egusphere-egu22-2097, 2022.

09:20–09:25 | EGU22-13342 | On-site presentation
Reik Donner, Tommaso Alberti, and Davide Faranda

An accurate understanding of dynamical similarities and dissimilarities in geomagnetic variability between quiet and disturbed periods has the potential to vastly improve space weather diagnosis. During the last years, several approaches rooted in dynamical systems theory have demonstrated their great potential for characterizing the instantaneous level of complexity in geomagnetic activity and solar wind variations, and for revealing indications of intermittent large-scale coupling and generalized synchronization phenomena in the Earth's electromagnetic environment. In this work, we focus on two complementary approaches based on the concept of recurrences in phase space, both of which quantify subtle geometric properties of the phase space trajectory instead of taking an explicitly temporal variability perspective. We first quantify the local (instantaneous) and global fractal dimensions and associated local stability properties of a suite of low-latitude (SYM-H, ASY-H) and high-latitude (AE, AL, AU) geomagnetic indices and discuss similarities and dissimilarities of the obtained patterns for one year of observations during a solar activity maximum. Subsequently, we study bivariate extensions of both approaches and demonstrate their capability of tracing different levels of interdependency between low- and high-latitude geomagnetic variability during periods of magnetospheric quiescence and along with perturbations associated with geomagnetic storms and magnetospheric substorms, respectively. Ultimately, we investigate the effect of time scale on the level of dynamical organization of fluctuations by studying iterative reconstructions of the index values based on intrinsic mode functions obtained from univariate and multivariate versions of empirical mode decomposition.
Our results open new perspectives on the nonlinear dynamics and (likely intermittent) mutual entanglement of different parts of the geospace electromagnetic environment, including the equatorial and westward auroral electrojets, depending on the overall state of the geospace system as affected by temporary variations of the solar wind forcing. In addition, they contribute to a better understanding of the potentials and limitations of two contemporary approaches to nonlinear time series analysis in the field of space physics.
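For context, the instantaneous (local) fractal dimension referred to above is commonly estimated from exceedances of negative log-distances via extreme value theory; a minimal sketch on synthetic data (the quantile choice is arbitrary, and refinements such as declustering are omitted):

```python
import numpy as np

def local_dimension(traj, i, q=0.98):
    # Instantaneous (local) dimension at state traj[i] via the extreme-value
    # approach: exceedances of -log(distance to traj[i]) above a high
    # quantile are approximately exponential with mean 1/d.
    dist = np.linalg.norm(traj - traj[i], axis=1)
    dist[i] = np.inf                     # exclude the reference point itself
    g = -np.log(dist)
    u = np.quantile(g, q)                # high threshold on the log-distances
    exceed = g[g > u] - u
    return 1.0 / exceed.mean()

rng = np.random.default_rng(5)
traj = rng.standard_normal((5000, 2))    # toy 2-D state cloud
d = local_dimension(traj, 0)             # should be close to 2
```

Applied pointwise along a geomagnetic index trajectory, this yields a time series of instantaneous dimensions whose variations can be compared between quiet and storm periods.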

How to cite: Donner, R., Alberti, T., and Faranda, D.: Instantaneous fractal dimensions and stability properties of geomagnetic indices based on recurrence networks and extreme value theory, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13342, https://doi.org/10.5194/egusphere-egu22-13342, 2022.

09:25–09:30 | EGU22-6281 | On-site presentation
Constantinos Papadimitriou, Georgios Balasis, Ioannis A. Daglis, and Simon Wing

In the past ten years, artificial neural networks (ANNs) and other machine learning methods have been used in a wide range of models and predictive systems to capture and even predict the onset and evolution of various types of phenomena. These applications typically require large datasets composed of many variables and parameters, the number of which can make the analysis cumbersome and prohibitively time consuming, especially when the interplay of all these parameters is taken into consideration. Fortunately, information-theoretical measures can be used not only to reduce the dimensionality of the input space of such a system but also to improve its efficiency. In this work, we present such a case, in which differential electron fluxes from the Magnetic Electron Ion Spectrometer (MagEIS) on board the Van Allen Probes satellites are modelled by a simple ANN using solar wind parameters and geomagnetic activity indices as inputs, and we illustrate how the proper use of information theory measures can improve the efficiency of the model by minimizing the number of input parameters and shifting them in time to their proper time-lagged versions.
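The time-lag selection described above can be illustrated with a histogram-based mutual information scan (a toy sketch with synthetic series; not the authors' data or estimator):

```python
import numpy as np

def mutual_information(x, y, bins=16):
    # Histogram-based mutual information estimate (in nats) between two series.
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))

rng = np.random.default_rng(6)
driver = rng.standard_normal(5000)                          # e.g. a solar wind input
response = np.roll(driver, 3) + 0.1 * rng.standard_normal(5000)  # lag-3 coupling

# Scan candidate lags; the MI peak identifies the proper time-lagged input.
lags = range(0, 8)
mi = [mutual_information(np.roll(driver, L), response) for L in lags]
best_lag = int(np.argmax(mi))
```

Running such a scan per candidate input both ranks the inputs (drop those with near-zero peak MI) and fixes the lag at which each surviving input should be fed to the ANN.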

How to cite: Papadimitriou, C., Balasis, G., Daglis, I. A., and Wing, S.: Application of information theoretical measures for improved machine learning modelling of the outer radiation belt, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6281, https://doi.org/10.5194/egusphere-egu22-6281, 2022.

09:30–09:35 | EGU22-1831 | Presentation form not yet defined
Shahbaz Chaudhry, Sandra Chapman, Jesper Gjerloev, Ciaran Beggan, and Alan Thompson

Geomagnetic storms can impact technological systems, on the ground and in space, including damage to satellites and power blackouts. Their impact on ground systems such as power grids depends upon the spatio-temporal extent and time-evolution of the ground magnetic perturbation driven by the storm.

Pc waves are Alfven wave resonances of closed magnetospheric field lines and are ubiquitous in the inner magnetosphere. They have been extensively studied, in particular since Pc wave power tracks the onset and evolution of geomagnetic storms. We study the spatial and temporal evolution of Pc waves with a network analysis of the 100+ ground-based magnetometer stations collated by the SuperMAG collaboration with a single time-base and calibration.

Network-based analysis of 1 min cadence SuperMAG magnetometer data has been applied to the dynamics of substorm current systems (Dods et al. JGR 2015, Orr et al. GRL 2019) and the magnetospheric response to IMF turnings (Dods et al. JGR 2017). It has the potential to capture the full spatio-temporal response with a few time-dependent network parameters. Now, with the availability of 1 sec data across the entire SuperMAG network we are able for the first time to apply network analysis globally to resolve both the spatial and temporal correlation patterns of the ground signature of Pc wave activity as a geomagnetic storm evolves. We focus on Pc2 (5-10s period) and Pc3 (10-45s period) wave bands. We obtain the time-varying global Pc wave dynamical network over individual space weather events.

To construct the networks, we sample each magnetometer time series with a moving window in the time domain (20 times the Pc period range) and band-pass filter each station's time series to obtain Pc2 and Pc3 waveforms. We then compute the time-lagged cross-correlation (TLXC) between all stations for each Pc band. Modelling is used to determine a threshold of significant TLXC above which a pair of stations is connected in the network. The TLXC as a function of lag is tested against a criterion for sinusoidal waveforms and then used to calculate the phase difference. The connections with a TLXC peak at non-zero lag form a directed network, which characterizes propagation or information flow. The connections with a TLXC peak close to zero lag form an undirected network, which characterizes a response that is globally and instantaneously coherent.

We apply this network analysis to isolated geomagnetic storms. We find that the network connectivity does not simply track Pc wave power; it therefore contains additional information. Geographically short-range connections are prevalent at all times; the storm onset marks a transition to a network with both an enhancement of geographically short-range connections and the growth of geographically long-range, global-scale connections extending spatially over a region exceeding 9 h MLT. These global-scale connections, indicating a globally coherent Pc wave response, are prevalent throughout the storm with considerable variation (within a few time windows). Since the stations are not uniformly distributed spatially, we treat long-range connections with care to avoid introducing spatial correlation.
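The TLXC step can be sketched as follows: scan lags, take the peak cross-correlation, and connect a station pair when the peak is significant, with the peak's lag deciding directed versus undirected membership (a toy sketch; the moving window, band-pass filtering, and significance modelling of the abstract are omitted):

```python
import numpy as np

def max_lagged_xcorr(a, b, max_lag=10):
    # Peak (approximately normalized) cross-correlation over lags in
    # [-max_lag, max_lag], returning (peak value, lag at peak).
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    best, best_lag = -np.inf, 0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            c = np.mean(a[lag:] * b[:b.size - lag])
        else:
            c = np.mean(a[:lag] * b[-lag:])
        if c > best:
            best, best_lag = c, lag
    return best, best_lag

rng = np.random.default_rng(7)
t = np.arange(2000)
wave = np.sin(2 * np.pi * t / 30)                    # a Pc-like oscillation
s1 = wave + 0.2 * rng.standard_normal(t.size)        # station 1
s2 = np.roll(wave, 5) + 0.2 * rng.standard_normal(t.size)  # station 2, delayed

peak, lag = max_lagged_xcorr(s1, s2)
# A station pair joins the directed network when the peak exceeds a
# significance threshold at non-zero lag (zero-lag peaks go to the
# undirected, instantaneously coherent network).
connected_directed = peak > 0.5 and lag != 0
```

Repeating this over all station pairs in each moving window yields the time-varying network whose connectivity is then tracked through the storm.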

How to cite: Chaudhry, S., Chapman, S., Gjerloev, J., Beggan, C., and Thompson, A.: Quantifying space-weather events using dynamical network analysis of Pc waves with global ground based magnetometers., EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1831, https://doi.org/10.5194/egusphere-egu22-1831, 2022.

09:35–10:00
Coffee break
Chairpersons: Giorgia Di Capua, Tommaso Alberti
10:20–10:25 | EGU22-8399 | ECS | On-site presentation
Felix Soltau, Sebastian Niehüser, and Jürgen Jensen

Tide gauges are exposed to various kinds of influences that can affect water level measurements significantly and lead to time series containing different phenomena and artefacts. These influences can be natural or anthropogenic; both lead to actual changes of the water level. In contrast, technical malfunction of the measuring devices, as another kind of influence, produces non-physical water level data. Both actual and non-physical data need to be detected and classified consistently, and possibly corrected, to enable the supply of adequate water level information. However, no automatically working detection algorithm exists yet. Only obvious or frequent technical malfunctions such as gaps can be detected automatically, and even these have to be corrected manually by trained staff. Consequently, there is no consistently defined data pre-processing before, for example, statistical analyses are performed or water level information for navigation is passed on.

In the research project DePArT*, we focus on detecting natural phenomena such as standing waves, meteotsunamis, and inland flood events, as well as anthropogenic artefacts such as the operation of storm surge barriers and sluices, in water level time series with one-minute resolution. To this end, we train artificial neural networks (ANNs) on water level sequences of phenomena and artefacts, as well as redundant data, to recognize them in other data sets. We use convolutional neural networks (CNNs), as they have already been applied successfully in, for example, object detection and speech and language processing (Gu et al., 2018). However, CNNs need to be trained on large numbers of sample sequences. Hence, as a next step, the idea is to synthesize rarely observed phenomena and artefacts to obtain enough training data. The trained CNNs can then be used to detect unnoticed phenomena and artefacts in past and recent time series. Depending on the sequence characteristics and the results of the synthesis, we may be able to detect certain events as they occur and thus provide pre-checked water level information in real time.
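A CNN learns its kernels from data, but the underlying mechanism, sliding a kernel along the series and responding where a pattern matches, can be shown with a single hand-crafted kernel (purely illustrative; not the project's architecture):

```python
import numpy as np

def conv1d(x, kernel):
    # Valid-mode 1-D convolution: the building block a CNN slides over a
    # water-level sequence to score it for a learned pattern.
    k = kernel.size
    return np.array([np.dot(x[i:i + k], kernel) for i in range(x.size - k + 1)])

# Toy series with a step-like artefact at index 50; a step-shaped kernel
# (negative then positive half) responds most strongly at the jump.
x = np.concatenate([np.zeros(50), np.ones(50)])
kernel = np.concatenate([-np.ones(5), np.ones(5)]) / 5
response = conv1d(x, kernel)
peak = int(np.argmax(response))    # window position where the step begins
```

In a trained CNN, many such kernels are learned jointly and stacked in layers, so that different filters come to respond to different phenomena and artefact shapes.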

In a later stage of this study, we will implement the developed algorithms in an operational test mode while cooperating closely with the responsible authorities to benefit from mutual feedback. In this way, the study contributes to a future consistent pre-processing and helps to increase the quality of water level data. Moreover, the results can reduce uncertainties from the measuring process and improve further calculations based on these data.

* DePArT (Detektion von küstenhydrologischen Phänomenen und Artefakten in minütlichen Tidepegeldaten; engl. Detection of coastal hydrological phenomena and artefacts in minute-by-minute tide gauge data) is a research project, funded by the German Federal Ministry of Education and Research (BMBF) through the project management of Projektträger Jülich PTJ under the grant number 03KIS133.

Gu, Wang, Kuen, Ma, Shahroudy, Shuai, Liu, Wang, Wang, Cai, Chen (2018): Recent advances in convolutional neural networks. In: Pattern Recognition, Vol. 77, Pages 354–377. https://doi.org/10.1016/j.patcog.2017.10.013

How to cite: Soltau, F., Niehüser, S., and Jensen, J.: Using neural networks to detect coastal hydrodynamic phenomena in high-resolution tide gauge data, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8399, https://doi.org/10.5194/egusphere-egu22-8399, 2022.

10:25–10:30
|
EGU22-2560
|
On-site presentation
Antonio Cicone

In this presentation, we introduce the IMFogram method (pronounced like "infogram"), a new, fast, local, and reliable time-frequency representation (TFR) method for nonstationary signals. The technique is based on the Intrinsic Mode Function (IMF) decomposition produced by a decomposition method, such as Empirical Mode Decomposition-based techniques, Iterative Filtering-based algorithms, or any equivalent method developed so far. We present the mathematical properties of the IMFogram and prove that this method is a generalization of the spectrogram. We conclude the presentation with some applications, as well as a comparison of its performance with other existing TFR techniques.
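
As a rough illustration of how an IMFogram-style representation can be assembled (a sketch based on the general description above, not Cicone's implementation), one can take precomputed IMFs from any decomposition method, estimate each IMF's local amplitude and frequency with the Hilbert transform, and deposit the amplitudes into time-frequency bins:

```python
import numpy as np
from scipy.signal import hilbert

def imfogram_sketch(imfs, fs, n_freq_bins=64):
    """Toy IMFogram-style TFR: for each IMF, estimate instantaneous
    amplitude and frequency via the analytic signal, then accumulate
    the amplitude into a (frequency, time) grid."""
    n = imfs.shape[1]
    tf = np.zeros((n_freq_bins, n))
    freqs = np.linspace(0.0, fs / 2.0, n_freq_bins)
    for imf in imfs:
        analytic = hilbert(imf)
        amp = np.abs(analytic)
        phase = np.unwrap(np.angle(analytic))
        inst_f = np.abs(np.gradient(phase)) * fs / (2.0 * np.pi)
        idx = np.clip(np.digitize(inst_f, freqs) - 1, 0, n_freq_bins - 1)
        tf[idx, np.arange(n)] += amp
    return tf, freqs

# Toy input: two tones standing in for two IMFs of a signal.
fs = 100.0
t = np.arange(0, 4, 1 / fs)
imfs = np.vstack([np.sin(2 * np.pi * 5 * t),
                  0.5 * np.sin(2 * np.pi * 20 * t)])
tf, freqs = imfogram_sketch(imfs, fs)
```

Because amplitude and frequency are estimated locally per IMF, the resulting picture concentrates energy along the instantaneous-frequency curves, which is the intuition behind the method's locality.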

How to cite: Cicone, A.: The IMFogram: a new time-frequency representation algorithm for nonstationary signals, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2560, https://doi.org/10.5194/egusphere-egu22-2560, 2022.

10:30–10:35
|
EGU22-8899
|
Presentation form not yet defined
|
Bahare Imanibadrbani, Hamzeh Mohammadigheymasi, Ahmad Sadidkhouy, Rui Fernandes, Ali Gholami, and Martin Schimmel

Different phases of seismic waves generated by earthquakes carry considerable information about subsurface structures as they propagate within the Earth. Depending on the scope and objective of an investigation, various types of seismic phases are studied. Surface waves image shallow and large-scale subsurface features, while body waves provide high-resolution images at greater depths, which cannot be resolved by surface waves. The most challenging aspect of studying body waves is extracting low-amplitude P and S phases that are predominantly masked by high-amplitude, low-attenuation surface waves overlapping in time and frequency. Although body waves generally contain higher frequencies than surface waves, the overlapping frequency spectra of body and surface waves limit the application of elementary signal processing methods such as conventional filtering; advanced signal processing tools are required to work around this problem. Recently, the Sparsity-Promoting Time-Frequency Filtering (SP-TFF) method was developed as a signal processing tool for discriminating between different phases of seismic waves based on their high-resolution polarization information in the time-frequency (TF) domain (Mohammadigheymasi et al., 2022). SP-TFF extracts different phases of seismic waves by incorporating this information and utilizing a combination of amplitude, directivity, and rectilinearity filters. This study implements SP-TFF by defining a filter combination set for the specific extraction of body waves masked by high-amplitude surface waves. Synthetic and real data examinations are conducted for the source mechanism of the Mw 7.5 earthquake that occurred in November 2021 in Northern Peru and was recorded by 58 stations of the United States National Seismic Network (USNSN).
The results show the remarkable performance of SP-TFF in extracting the P and SV phases on the vertical and radial components and the SH phase on the transverse component, masked by high-amplitude Rayleigh and Love waves, respectively. A range of S/N levels is tested, indicating the algorithm's robustness at different noise levels. This research contributes to the FCT-funded SHAZAM (Ref. PTDC/CTA-GEO/31475/2017) and IDL (Ref. FCT/UIDB/50019/2020) projects. It also uses computational resources provided by C4G (Collaboratory for Geosciences) (Ref. PINFRA/22151/2016).
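
The rectilinearity attribute mentioned above can be illustrated with classic covariance-based polarization analysis on a three-component time window (a simplified sketch only; SP-TFF computes high-resolution polarization attributes per time-frequency pixel, which this toy example does not attempt):

```python
import numpy as np

def rectilinearity(z, n, e):
    """Covariance-based polarization attribute: eigenvalues of the
    three-component covariance matrix. Rectilinear (body-wave-like)
    motion has one dominant eigenvalue, giving a value near 1;
    elliptical (Rayleigh-like) motion gives a smaller value."""
    C = np.cov(np.vstack([z, n, e]))
    lam = np.sort(np.linalg.eigvalsh(C))[::-1]  # descending eigenvalues
    return 1.0 - (lam[1] + lam[2]) / (2.0 * lam[0])

# Toy signals: linearly polarized (P-like) vs. elliptical (Rayleigh-like).
t = np.linspace(0, 1, 500)
s = np.sin(2 * np.pi * 10 * t)
lin = rectilinearity(s, 0.8 * s, 0.3 * s)                      # all in phase
ell = rectilinearity(s, np.cos(2 * np.pi * 10 * t), np.zeros_like(t))
```

A filter built on such attributes can then pass TF regions with high rectilinearity (body waves) and suppress elliptically polarized surface-wave energy.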

REFERENCE
Mohammadigheymasi, H., P. Crocker, M. Fathi, E. Almeida, G. Silveira, A. Gholami, and M. Schimmel, 2022, Sparsity-promoting approach to polarization analysis of seismic signals in the time-frequency domain: IEEE Transactions on Geoscience and Remote Sensing, 1–1.

How to cite: Imanibadrbani, B., Mohammadigheymasi, H., Sadidkhouy, A., Fernandes, R., Gholami, A., and Schimmel, M.: Body wave extraction by using sparsity-promoting time-frequency filtering, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8899, https://doi.org/10.5194/egusphere-egu22-8899, 2022.

10:35–10:40
|
EGU22-2014
|
ECS
|
Virtual presentation
|
Zahra Zali, Theresa Rein, Frank Krüger, Matthias Ohrnberger, and Frank Scherbaum

Since the ocean covers 71% of the Earth’s surface, records from ocean bottom seismometers (OBS) are essential for investigating the whole Earth’s structure. However, data from ocean bottom recordings are commonly difficult to analyze due to the high noise level, especially on the horizontal components. In addition, signals of seismological interest, such as earthquake recordings at teleseismic distances, are masked by oceanic noise. Noise reduction of OBS data is therefore an important task required for the analysis of OBS records. Different approaches suggested in previous studies successfully remove noise from the vertical components; however, noise reduction on the horizontal components has remained problematic. Here we introduce a method based on the harmonic-percussive separation (HPS) algorithms used in Zali et al. (2021), which is able to separate long-lasting narrowband signals from broadband transients in OBS records. In the context of OBS noise reduction with HPS algorithms, percussive components correspond to earthquake signals and harmonic components correspond to noise. OBS noise, which appears as narrowband horizontal structures in the short-time Fourier transform (STFT) spectrogram, is readily distinguishable from transient, short-duration seismic events, which appear as vertical structures. Through HPS algorithms we separate horizontal from vertical structures in the STFT spectrogram. Using this method we can reduce OBS noise on both the vertical and horizontal components, retrieve clearer broadband earthquake waveforms, and increase the earthquake signal-to-noise ratio. The applicability of the method is demonstrated through tests on synthetic and real data.

How to cite: Zali, Z., Rein, T., Krüger, F., Ohrnberger, M., and Scherbaum, F.: OBS noise reduction using music information retrieval algorithms, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2014, https://doi.org/10.5194/egusphere-egu22-2014, 2022.

10:40–10:45
|
EGU22-11064
|
ECS
|
Virtual presentation
Bálint Kaszás, Tiemo Pedergnana, and George Haller

According to a fundamental axiom of continuum mechanics, material response should be objective, i.e., indifferent to the observer. In the context of geophysical fluid dynamics, fluid-transporting vortices must satisfy this axiom and hence different observers should come to the same conclusion about the location and size of these vortices. As a consequence, only objectively defined extraction methods can provide reliable results for material vortices.

As velocity fields are inherently non-objective, they render most Eulerian flow-feature detection methods non-objective. To resolve this issue, we discuss a general decomposition of a velocity field into an objective deformation component and a rigid-body component. We obtain this decomposition as the solution of a physically motivated extremum problem for the closest rigid-body velocity of a general velocity field.

This extremum problem turns out to have a unique, physically interpretable, closed-form solution. Subtracting this solution from the velocity field then gives an objective deformation velocity field that is also physically observable. As a consequence, all common Eulerian feature detection schemes, as well as the momentum, energy, vorticity, enstrophy, and helicity of the flow, become objective when computed from the deformation velocity component. We illustrate the use of this deformation velocity field on several velocity data sets.
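
Stated schematically (in our own notation, inferred from the description above and not quoted from the authors), the extremum problem seeks the rigid-body velocity field closest to the observed field in the L² sense over the flow domain U:

```latex
\min_{\dot{\mathbf{x}}_{0},\;\boldsymbol{\Omega}=-\boldsymbol{\Omega}^{T}}
\int_{U}\left|\,\mathbf{v}(\mathbf{x},t)
-\dot{\mathbf{x}}_{0}(t)
-\boldsymbol{\Omega}(t)\left(\mathbf{x}-\mathbf{x}_{0}(t)\right)\right|^{2}\mathrm{d}V ,
```

where the minimization runs over translation rates and skew-symmetric spin tensors, and the deformation component is the residual, $\mathbf{v}_{d}=\mathbf{v}-\mathbf{v}_{\mathrm{rig}}$. Since two observers' descriptions of the same flow differ by a time-dependent rigid-body motion, this residual is the part on which they can agree.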

How to cite: Kaszás, B., Pedergnana, T., and Haller, G.: The Objective Deformation Component of a Velocity Field, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11064, https://doi.org/10.5194/egusphere-egu22-11064, 2022.

10:45–10:50
|
EGU22-12351
|
ECS
|
Presentation form not yet defined
Maria Tsekhmistrenko, Ana Ferreira, Kasra Hosseini, and Thomas Kitching

Data from ocean-bottom seismometers (OBS) are inherently more challenging than their land counterparts because of the noisy environment. Primary and secondary microseismic noise corrupts the recorded time series. Additionally, anthropogenic noise (e.g., ships) and animal noise (e.g., whales) contribute to a complex noise field that can make it challenging to use traditional filtering methods (e.g., broadband or Gabor filters) to clean and extract information from these seismograms.

OBS deployments are laborious, expensive, and time-consuming. The data from these deployments are crucial for investigating and covering the "blind spots" left by sparse station coverage. It therefore becomes vital to remove the noise and retrieve the earthquake signals recorded on these seismograms.

We propose analysing and processing such unique and challenging data with Machine Learning (ML), particularly Deep Learning (DL) techniques, where conventional methods fail. We present a variational autoencoder (VAE) architecture to denoise seismic waveforms, with the aim of extracting more information than previously possible. We argue that, compared to other fields, seismology is well positioned to use ML and DL techniques thanks to the massive datasets recorded by seismometers.

In the first step, we use synthetic seismograms (generated with Instaseis) and white noise to train a deep neural network, varying the signal-to-noise ratio during training. Such synthetic datasets have two advantages. First, we know both the signal and the noise (as we have injected the noise ourselves). Second, we can generate large training and validation datasets, one of the prerequisites for high-quality DL models.
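
The SNR-controlled corruption used to build such (clean, noisy) training pairs can be sketched in a few lines (an illustrative numpy version; the actual training pipeline, noise models, and parameters are the authors'):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise_at_snr(signal, snr_db, rng=rng):
    """Add white noise scaled so the resulting trace has a prescribed
    signal-to-noise ratio in dB -- the controlled corruption used to
    build (clean, noisy) pairs for training a denoiser."""
    p_signal = np.mean(signal**2)
    p_noise = p_signal / (10.0 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(p_noise), size=signal.shape)
    return signal + noise

# Toy "synthetic seismogram": a single low-frequency wavelet.
clean = np.sin(2 * np.pi * 1.0 * np.linspace(0, 10, 2000))
noisy = add_noise_at_snr(clean, snr_db=3.0)
```

Sweeping `snr_db` over a range during training exposes the network to both mild and severe corruption, which is what makes the learned denoiser robust across noise levels.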

Next, we increase the complexity of the input data by adding real noise, sampled from land and OBS records, to the synthetic seismograms. Finally, we apply the trained model to real OBS data recorded during the RHUM-RUM experiment.

We present the workflow, the neural network architecture, our training strategy, and the usefulness of our trained models compared to traditional methods.

How to cite: Tsekhmistrenko, M., Ferreira, A., Hosseini, K., and Kitching, T.: VAE4OBS: Denoising ocean bottom seismograms using variational autoencoders, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12351, https://doi.org/10.5194/egusphere-egu22-12351, 2022.

10:50–11:05