
NP4.1

This interdisciplinary session welcomes contributions on novel conceptual approaches and methods for the analysis of observational as well as model time series from all geoscientific disciplines.

Methods to be discussed include, but are not limited to:
- linear and nonlinear methods of time series analysis
- time-frequency methods
- predictive approaches
- statistical inference for nonlinear time series
- nonlinear statistical decomposition and related techniques for multivariate and spatio-temporal data
- nonlinear correlation analysis and synchronisation
- surrogate data techniques
- filtering approaches and nonlinear methods of noise reduction
- artificial intelligence and machine learning based analysis and prediction for univariate and multivariate time series

Contributions on methodological developments and applications to problems across all geoscientific disciplines are equally encouraged.
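One of the listed techniques, surrogate data, is compact enough to illustrate here. The sketch below (our own illustration, not tied to any contribution) builds a Fourier phase-randomised surrogate, which preserves a series' power spectrum (and hence its linear autocorrelation) while destroying any nonlinear structure:

```python
import numpy as np

def phase_surrogate(x, rng):
    """Fourier phase-randomised surrogate: keeps the power spectrum (hence
    the linear autocorrelation) of x, destroys nonlinear structure."""
    n = len(x)
    amp = np.abs(np.fft.rfft(x))
    phases = rng.uniform(0.0, 2.0 * np.pi, amp.size)
    phases[0] = 0.0            # zero-frequency term must stay real
    if n % 2 == 0:
        phases[-1] = 0.0       # ... and so must the Nyquist term
    return np.fft.irfft(amp * np.exp(1j * phases), n=n)

rng = np.random.default_rng(42)
x = np.cumsum(rng.normal(size=512))   # toy "observed" series
s = phase_surrogate(x, rng)
```

A nonlinearity test then compares a discriminating statistic on `x` against its distribution over an ensemble of such surrogates.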

Co-organized by CL5/EMRP2/ESSI2/HS3
Convener: Reik Donner | Co-conveners: Tommaso Alberti, Andrea Toreti
Attendance Thu, 07 May, 16:15–18:00 (CEST)


Chat time: Thursday, 7 May 2020, 16:15–18:00

D2841 |
EGU2020-1313
| Highlight
Ray Huffaker and Rafael Munoz-Carpena

The complex soil biome is a centerpiece in providing essential ecosystem services that humans rely on (carbon sequestration, food security, one-health interactions). Agricultural engineers and soil scientists are developing wireless sensor networks (WSN) that collect large/big data on key soil state variables (water content, temperature, chemistry) to better understand the soil biome's primary environmental drivers. The profession extracts information from WSN records with methods including soil-process modeling and artificial-intelligence (AI) algorithms. However, these approaches carry their own limitations. A recent review article faulted current soil-process modeling for inadequately detecting and resolving model structural (abstraction) errors. AI experts themselves caution against indiscriminate use of AI methods because of: a) problems replicating past results due to inconsistent experimental methods; b) difficulty in explaining how a particular method arrives at its conclusions (the black-box problem) and thus in correcting algorithms that learn 'bad lessons'; and c) a lack of rigorous criteria for selecting AI architectures. An alternative approach to address these limitations is to investigate new strategies for reducing large/big data problems into smaller, more interpretable causal abstractions of the soil system.

We develop an innovative data-diagnostics framework—based on empirical nonlinear dynamics techniques from physics—that addresses the above concerns over soil-process modeling and AI algorithms. We diagnose whether WSN and other similar environmental large/big data are likely generated by dimension-reducing (i.e., dissipative) nonlinear dynamics. An n-dimensional nonlinear dynamic system is dissipative if its long-term dynamics are bounded within m<<n dimensions, so that the problem of modeling long-term dynamics shrinks by the n-m inactive degrees of freedom. If so, long-term system dynamics can be investigated with relatively few degrees of freedom that capture the complexity of the overall system generating the observed data. To make this diagnosis, we first apply signal processing to isolate structured variation (signal) from unstructured variation (noise) in large/big data time series records, and test signals for nonlinear stationarity. We resolve the structure of isolated signals by distinguishing between stochastic forcing and deterministic nonlinear dynamics; reconstruct the phase-space dynamics most likely generating the signals; and test the statistical significance of the reconstructed dynamics with surrogate data. If the reconstructed phase space is dimension-reducing, we can formulate low-dimensional (phenomenological) ODE models to investigate nonlinear causal interactions between key soil environmental driving factors. When we do not diagnose dimension-reducing nonlinear real-world dynamics, the underlying dynamics are most likely high dimensional and the information-extraction problem cannot be shrunk without losing essential dynamic information. In this case, other high-dimensional analysis techniques like AI offer a better modeling alternative for mapping out interactions. Our framework supplies a decision-support tool that guides data practitioners toward the most informative and parsimonious information-extraction method—a win-win result.
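The phase-space reconstruction step mentioned above is typically done by time-delay embedding in the sense of Takens. A minimal sketch, with a hypothetical helper `delay_embed` standing in for whatever implementation the authors use:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Takens time-delay embedding of a scalar series into dim dimensions
    with delay tau (hypothetical helper, for illustration only)."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

# toy stand-in for a de-noised WSN soil-moisture signal
t = np.linspace(0, 20 * np.pi, 2000)
x = np.sin(t) + 0.05 * np.random.default_rng(0).normal(size=t.size)
emb = delay_embed(x, dim=3, tau=25)
print(emb.shape)   # (1950, 3)
```

Dimension estimates and surrogate tests are then computed on the embedded point cloud `emb` rather than on the raw scalar series.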

We will share preliminary results applying this empirical framework to three soil moisture sensor time series records analyzed with machine learning methods in Bean, Huffaker, and Migliaccio (2018).

How to cite: Huffaker, R. and Munoz-Carpena, R.: A nonlinear dynamics approach to data-enabled science: Reconstructing soil-moisture dynamics from big data collected by wireless sensor networks, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1313, https://doi.org/10.5194/egusphere-egu2020-1313, 2019

D2842 |
EGU2020-8763
| Highlight
Isabel Serra, David Moriña, Pere Puig, and Álvaro Corral

Intense geomagnetic storms can cause severe damage to electrical systems and communications. This work proposes a counting process with Weibull inter-occurrence times in order to estimate the probability of extreme geomagnetic events. It is found that the scale parameter of the inter-occurrence time distribution grows exponentially with the absolute value of the intensity threshold defining the storm, whereas the shape parameter stays rather constant. The model is able to forecast the probability of occurrence of an event for a given intensity threshold; in particular, the probability of occurrence in the next decade of an extreme event of a magnitude comparable to or larger than the well-known Carrington event of 1859 is explored, and estimated to be between 0.46% and 1.88% (with 95% confidence), a much lower value than those reported in the existing literature.
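For orientation, a decadal occurrence probability can be sketched directly from a Weibull waiting-time distribution. The shape and scale values below are illustrative placeholders only, not the fitted values of this study, and the sketch ignores the time already elapsed since the last event (no renewal correction):

```python
import math

# Survival function of a Weibull waiting time: S(t) = exp(-(t/scale)**shape),
# so P(next inter-occurrence time <= t) = 1 - S(t).
def weibull_cdf(t, shape, scale):
    return 1.0 - math.exp(-((t / scale) ** shape))

shape, scale = 0.7, 500.0   # hypothetical parameters (waiting time in years)
p_decade = weibull_cdf(10.0, shape, scale)
print(f"P(event within 10 yr) = {p_decade:.4f}")
```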

How to cite: Serra, I., Moriña, D., Puig, P., and Corral, Á.: Probability estimation of a Carrington-like geomagnetic storm, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8763, https://doi.org/10.5194/egusphere-egu2020-8763, 2020

D2843 |
EGU2020-13714
Zahra Zali, Frank Scherbaum, Matthias Ohrnberger, and Fabrice Cotton

Volcanic tremor is one of the most important signals in volcano seismology because of its potential as a tool for forecasting eruptions and for better understanding the underlying volcanic processes. Despite the different mechanisms suggested for volcanic tremor generation, the exact process is not yet well understood. The signal usually comes along with a large number of earthquakes during unrest periods, which affect the shape and amplitude of the tremor. Careful signal processing is required to separate earthquakes and other transient signals from the seismic waveform and derive a time series of volcanic tremor, which can provide new insight into tremor source investigations. Exploiting the idea of harmonic-percussive separation from musical signal processing, we have developed a method to extract volcanic tremor and transient events from the seismic signal. Using periodicity as the underlying generation process of tremor, we are able to extract the volcanic tremor signal based on the self-similarity properties of spectra in the time-frequency domain. The separation process results in two spectrograms representing repeating (long-lasting) and non-repeating (short-lived) patterns.

From the spectrogram of the repeating pattern we reconstruct the signal in the time domain by adding the original spectrogram's phase information, thus creating a modified version of the long-lasting tremor signal.

Further, we can derive a characteristic function for transient events by integrating the amplitude of the non-repeating spectrogram in each time frame. This function has a non-zero value at transient-event instances and is zero in time periods devoid of such events. Considering transient events as earthquakes, we apply an onset detector to time the first arrivals of the transient signal using the slope of the function. First we determine local maxima of the function, which show good correspondence to even the tiniest transient signals. From the peak locations we calculate the slope of each point within a period of 6 seconds preceding each peak. The uncertainty of the picked P onsets is up to 0.32 seconds, which is equal to the hop size of the calculated spectrogram. The advantage of timing earthquakes with this method is the ability to detect very weak seismic events, although due to the small window size of the short-time Fourier transform the process is time consuming. The results of this study are promising, and further testing is ongoing to validate the method as well as to determine its applications and limitations.
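The harmonic-percussive separation idea borrowed here from music information retrieval is classically implemented by median-filtering the spectrogram along time (enhancing long-lasting, tremor-like energy) and along frequency (enhancing transients), as in Fitzgerald (2010). A generic sketch of that technique, not the authors' processing chain:

```python
import numpy as np
from scipy.ndimage import median_filter
from scipy.signal import istft, stft

def hpss(x, fs, nperseg=256, kernel=17):
    """Median-filtering harmonic/percussive separation of a signal."""
    freqs, times, Z = stft(x, fs=fs, nperseg=nperseg)
    S = np.abs(Z)
    H = median_filter(S, size=(1, kernel))  # median along time: harmonic part
    P = median_filter(S, size=(kernel, 1))  # median along frequency: percussive
    mask_h = H >= P                         # hard masks; original phase reused
    _, xh = istft(Z * mask_h, fs=fs, nperseg=nperseg)
    _, xp = istft(Z * ~mask_h, fs=fs, nperseg=nperseg)
    return xh, xp

fs = 1000
tt = np.arange(0, 2.0, 1 / fs)
tone = np.sin(2 * np.pi * 50 * tt)         # long-lasting "tremor"
clicks = np.zeros_like(tt)
clicks[500::1000] = 5.0                    # two short "earthquake" transients
xh, xp = hpss(tone + clicks, fs)
xh, xp = xh[:tt.size], xp[:tt.size]        # trim synthesis padding
```

In this toy case, the steady 50 Hz tone ends up in `xh` and the impulsive clicks in `xp`, mirroring the tremor/transient split described above.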

How to cite: Zali, Z., Scherbaum, F., Ohrnberger, M., and Cotton, F.: Automatic transient signal detection and volcanic tremor extraction using music information retrieval strategies, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13714, https://doi.org/10.5194/egusphere-egu2020-13714, 2020

D2844 |
EGU2020-17092
Alexandre Hippert-Ferrer, Yajing Yan, and Philippe Bolon

Time series analysis constitutes a thriving subject in satellite image derived displacement measurement, especially since the launch of the Sentinel satellites, which provide free and systematic satellite image acquisitions with extended spatial coverage and reduced revisit time. Large volumes of satellite images are available for monitoring numerous targets at the Earth's surface, which allows for significant improvements in displacement measurement precision by means of advanced multi-temporal methods. However, satellite image derived displacement time series can suffer from missing data, mainly due to technical limitations of the ground-displacement computation methods (e.g. offset tracking) and to surface property changes from one acquisition to another. Missing data can hinder the full exploitation of the displacement time series, which can potentially weaken both the knowledge and the interpretation of the physical phenomenon under observation. Therefore, an efficient missing-data imputation approach is of particular importance for data completeness. In this work, an iterative method, namely the extended Expectation Maximization - Empirical Orthogonal Functions (EM-EOF) method, is proposed to retrieve missing values in satellite image derived displacement time series. The method uses both the spatial and the temporal correlations in the displacement time series for reconstruction. For this purpose, the spatio-temporal covariance of the time series is iteratively estimated and decomposed into different EOF modes by solving the eigenvalue problem in an EM-like scheme. To determine the optimal number of EOF modes, two robust metrics, the cross-validation error and a confidence index obtained from eigenvalue uncertainty, are defined. The former metric is also used as the convergence criterion of the iterative update of the missing values.
Synthetic simulations are first performed in order to demonstrate the ability of the extended EM-EOF method to impute missing data in cases of complex displacement, gap and noise behaviors. Then, the method is applied to time series of offset-tracking displacement measurements from Sentinel-2 images acquired between January 2017 and September 2019 over Fox Glacier in the Southern Alps of New Zealand. Promising results confirm the efficiency of the extended EM-EOF method for missing-data imputation of satellite image derived displacement time series.
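The EM-like alternation at the heart of such gap filling (truncate to a few EOFs, re-insert the observed values, iterate) can be sketched generically; this illustration omits the paper's mode selection by cross validation and eigenvalue uncertainty:

```python
import numpy as np

def eof_impute(X, n_modes=1, n_iter=50):
    """EM-style gap filling: initialise gaps with column means, then
    alternate a truncated-SVD (EOF) reconstruction with re-insertion of
    the observed entries."""
    X = X.copy()
    miss = np.isnan(X)
    X[miss] = np.take(np.nanmean(X, axis=0), np.where(miss)[1])
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        recon = (U[:, :n_modes] * s[:n_modes]) @ Vt[:n_modes]
        X[miss] = recon[miss]   # update only the gaps
    return X

rng = np.random.default_rng(1)
t = np.linspace(0, 4 * np.pi, 200)
truth = np.outer(np.sin(t), rng.normal(size=10))  # rank-1 "displacement" field
X = truth.copy()
X[rng.random(X.shape) < 0.2] = np.nan             # knock out 20% of the data
filled = eof_impute(X, n_modes=1)
```

For this noiseless rank-1 toy field the iteration recovers the gaps almost exactly; real displacement data additionally require choosing the number of modes carefully, which is exactly what the two metrics above address.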

How to cite: Hippert-Ferrer, A., Yan, Y., and Bolon, P.: Spatio-temporal missing data reconstruction in satellite displacement measurement time series, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-17092, https://doi.org/10.5194/egusphere-egu2020-17092, 2020

D2845 |
EGU2020-3744
Ganesh Ghimire, Navid Jadidoleslam, Witold Krajewski, and Anastasios Tsonis

Streamflow is a dynamical process that integrates water movement in space and time within basin boundaries. The authors characterize the dynamics associated with streamflow time series data from about seventy-one U.S. Geological Survey (USGS) stream-gauge stations in the state of Iowa. They employ a novel approach called the visibility graph (VG), which maps time series into complex networks to investigate the time-evolutionary behavior of a dynamical system. The authors focus on a simple variant of the VG algorithm called the horizontal visibility graph (HVG). The tracking of dynamics, and hence the predictability of streamflow processes, is carried out by extracting two key pieces of information: the characteristic exponent λ of the degree distribution and the global clustering coefficient GC of the HVG-derived network. The authors use these two measures to identify whether the streamflow process has its origin in random or chaotic processes. They show that the characterization of streamflow dynamics is sensitive to data attributes. Through a systematic and comprehensive analysis, the authors illustrate that streamflow dynamics characterization is sensitive to the normalization and the time scale of the streamflow time series. At the daily scale, streamflow at all stations used in the analysis reveals randomness with a strong spatial-scale (basin size) dependence. This has implications for the predictability of streamflow and floods. The authors demonstrate that the dynamics transition from potentially chaotic to randomly correlated processes as the averaging time scale increases. Finally, the temporal trends of λ and GC are statistically significant at about 40% of the total number of stations analyzed. Attributing this trend to factors such as changing climate or land use requires further research.
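The HVG construction underlying this analysis is simple: two time points are linked if every sample strictly between them lies below both. A self-contained sketch (our own illustration; for i.i.d. series the HVG degree distribution is known to decay exponentially with exponent ln(3/2)):

```python
import numpy as np

def hvg_degrees(x):
    """Degrees of the horizontal visibility graph: i and j are linked iff
    every sample strictly between them is lower than min(x[i], x[j])."""
    n = len(x)
    deg = np.zeros(n, dtype=int)
    for i in range(n - 1):
        top = -np.inf                    # running max of the samples between
        for j in range(i + 1, n):
            if top < min(x[i], x[j]):    # nothing in between blocks the view
                deg[i] += 1
                deg[j] += 1
            top = max(top, x[j])
            if x[j] >= x[i]:             # the view to the right is now blocked
                break
    return deg

print(hvg_degrees(np.array([3.0, 1.0, 2.0, 4.0])))   # [3 2 3 2]
```

The exponent λ is then obtained from an exponential fit to the empirical degree distribution.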

How to cite: Ghimire, G., Jadidoleslam, N., Krajewski, W., and Tsonis, A.: Inference On Streamflow Predictability Using Horizontal Visibility Graph Based Networks, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3744, https://doi.org/10.5194/egusphere-egu2020-3744, 2020

D2846 |
EGU2020-20335
Laurane Charrier, Yajing Yan, Elise Koeniguer, Emmanuel Trouvé, Romain Millan, Jérémie Mouginot, and Anna Derkacheva

Glacier response to climate change results in natural hazards, sea level rise and changes in freshwater resources. To evaluate this response, glacier surface flow velocity constitutes a crucial parameter to study. Nowadays, more and more velocity maps at regional or global scales derived from satellite SAR and/or optical images are available online or on demand. Such amounts of data require appropriate data fusion strategies in order to generate displacement time series with improved precision and spatio-temporal coverage. The improved displacement time series can then be used by advanced multi-temporal analysis approaches for further physical interpretation of the phenomenon under observation. In this work, time series of Sentinel-2 (10 m resolution, every 5 days), Landsat-8 (15 m resolution, every 16 days) and Venus (5 m resolution, every 2 days) images acquired between January 2017 and September 2018 over Fox Glacier in the Southern Alps of New Zealand are investigated. Velocities are generated with an offset-tracking technique using an automatic processing chain for every possible repeat cycle (2–100 days and 300–400 days). Thousands of velocity maps are available, and they are subject to both uncertainty and data gaps. In order to produce a displacement time series that is as precise and complete as possible, we propose three fusion strategies: 1) use all the available Sentinel-2 displacement maps with different time spans, the goal being to construct a time series of displacement with respect to a common master by means of an inversion; 2) take only Sentinel-2 displacement maps with time spans as small as possible while keeping as much redundancy as possible in the network, so that a common-master displacement time series can be constructed by inversion; 3) follow the previous strategy but use all available displacement maps from the 3 sensors, with their different temporal sampling and measurement precision taken into account.
Afterwards, the common master displacement time series will be analysed by a data mining approach in order to extract unusual spatio-temporal patterns in the time series.
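Fusion strategy 1, inverting redundant pairwise displacement maps to a time series relative to a common master, is a linear least-squares problem. A toy sketch with made-up epochs and image pairs (not the authors' processing chain):

```python
import numpy as np

# hypothetical acquisition epochs 0..3 (epoch 0 = common master, fixed to 0)
pairs = [(0, 1), (1, 2), (0, 2), (2, 3), (1, 3)]
true_series = np.array([0.0, 2.0, 3.0, 7.0])   # cumulative displacement
rng = np.random.default_rng(3)
d = np.array([true_series[j] - true_series[i] for i, j in pairs])
d = d + rng.normal(scale=0.1, size=d.size)     # pairwise measurement noise

# design matrix: one column per unknown epoch 1..3
A = np.zeros((len(pairs), true_series.size - 1))
for k, (i, j) in enumerate(pairs):
    if j > 0:
        A[k, j - 1] += 1.0
    if i > 0:
        A[k, i - 1] -= 1.0

est, *_ = np.linalg.lstsq(A, d, rcond=None)
print(est)   # close to [2, 3, 7], up to the noise level
```

Strategy 3 would additionally weight the rows of `A` and `d` by the per-sensor measurement precision.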

How to cite: Charrier, L., Yan, Y., Koeniguer, E., Trouvé, E., Millan, R., Mouginot, J., and Derkacheva, A.: Fusion and mining of glacier surface flow velocity time series, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20335, https://doi.org/10.5194/egusphere-egu2020-20335, 2020

D2847 |
EGU2020-2274
Yunzhong Shen, Fengwei Wang, and Qiujie Chen

Since a time series is usually incomplete, missing data are usually interpolated before employing singular spectrum analysis (SSA). We develop a new SSA for processing incomplete time series based on the property that an original time series can be reproduced from its principal components, which are estimated here using a minimum-norm criterion. When an incomplete time series is polluted by multiplicative noise, we first convert the multiplicative noise to additive noise by multiplying with the signal estimate of the time series, and then process the time series with weighted SSA, where the weight factor is determined according to the variance of the additive noise, since the converted additive noise is heterogeneous. The proposed SSA approach is employed to process real incomplete time series data of suspended-sediment concentration from San Francisco Bay and is compared to the traditional SSA and homomorphic log-transformation SSA approaches. The first 10 principal components derived by our proposed SSA approach capture more of the total variance, with less fitting error, than the traditional and homomorphic log-transformation SSA approaches. Furthermore, the results from the simulation cases confirm that our proposed SSA outperforms both the traditional and homomorphic log-transformation SSA approaches.

How to cite: Shen, Y., Wang, F., and Chen, Q.: A new singular spectrum analysis approach for processing incomplete time series polluted by multiplicative noise, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2274, https://doi.org/10.5194/egusphere-egu2020-2274, 2020

D2848 |
EGU2020-1030
Adrian Odenweller and Reik Donner

The quantification of synchronization phenomena of extreme events has recently aroused a great deal of interest in various disciplines. Climatological studies therefore commonly draw on spatially embedded climate networks in conjunction with nonlinear time series analysis. Among the multitude of similarity measures available to construct climate networks, Event Synchronization (ES) and Event Coincidence Analysis (ECA) stand out as two conceptually and computationally simple nonlinear methods. While ES defines synchrony in a data-adaptive local way that does not distinguish between different time scales, ECA requires the selection of a specific time scale for synchrony detection.

Herein, we provide evidence that, due to its parameter-free structure, ES has structural difficulties disentangling synchrony from serial dependency, whereas ECA is less prone to such biases. We use coupled autoregressive processes to numerically study the sensitivity of results from both methods to changes of the coupling and autoregressive parameters. This reveals that ES has difficulties detecting synchrony when events tend to occur in temporal clusters, which is to be expected for climate time series whose extreme events are defined by exceedances of certain percentiles.
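For concreteness, the precursor-rate flavour of ECA counts, for each event in one series, whether at least one event of the other series precedes it within a chosen window. A generic sketch (not the corrected estimators used in this study):

```python
import numpy as np

def eca_precursor_rate(a, b, delta_t):
    """Fraction of events in b preceded by at least one event in a within
    delta_t time steps (precursor coincidence rate)."""
    a = np.asarray(a)
    return sum(
        bool(np.any((bt - a >= 0) & (bt - a <= delta_t))) for bt in np.asarray(b)
    ) / len(b)

a = np.array([10, 30, 50, 70])   # toy event times
b = a + 2                        # events in b trail events in a by 2 steps
print(eca_precursor_rate(a, b, delta_t=3))   # 1.0
```

The explicit window `delta_t` is exactly the time-scale choice that distinguishes ECA from the parameter-free ES.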

These conceptual concerns are not only reproducible in numerical simulations, but also have implications for real-world data. We construct a climate network from satellite-based precipitation data of the Tropical Rainfall Measuring Mission (TRMM) for the Indian Summer Monsoon, thereby reproducing results of previously published studies. We demonstrate that there is an undesirable link between the fraction of events on subsequent days and the degree density at each grid point of the climate network. This indicates that the explanatory power of ES climate networks might be hampered, since trivial local properties of the underlying time series significantly predetermine the final network structure; this holds especially true for areas that had previously been reported as important for governing monsoon dynamics at large spatial scales. In contrast, ECA does not appear to be as vulnerable to these biases and additionally allows tracing the spatiotemporal propagation of synchrony in climate networks.

Our analysis rests on corrected versions of both methods that alleviate different normalization problems of the original definitions, which is especially important for short time series. Our findings suggest that careful event detection and diligent preprocessing are recommended when applying ES, while this is less crucial for ECA. Results obtained from ES climate networks therefore need to be interpreted with caution.

How to cite: Odenweller, A. and Donner, R.: Disentangling synchrony from serial dependency in complex climate networks: Comparing Event Synchronization and Event Coincidence Analysis, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1030, https://doi.org/10.5194/egusphere-egu2020-1030, 2019

D2849 |
EGU2020-12938
Reik Donner and Jaqueline Lekscha

Analysing palaeoclimate proxy time series using windowed recurrence network analysis (wRNA) has been shown to provide valuable information on past climate variability. In turn, it has also been found that the robustness of the obtained results differs among proxies from different palaeoclimate archives. To systematically test the suitability of wRNA for studying different types of palaeoclimate proxy time series, we use the framework of forward proxy modelling. For this, we create artificial input time series with different properties and compare the areawise significant anomalies detected using wRNA in the input and in the model-output time series. Also taking into account results for general filtering of different time series, we find that the variability of the network transitivity is altered for stochastic input time series while being rather robust for deterministic input. In terms of significant anomalies of the network transitivity, we observe that these anomalies may be missed by proxies from tree and lake archives after the nonlinear filtering by the corresponding proxy system models. For proxies from speleothems, we additionally observe falsely identified significant anomalies that are not present in the input time series. Finally, for proxies from ice cores, the wRNA results show the best correspondence with those for the input data. Our results contribute to improving the interpretation of windowed recurrence network analysis results obtained from real-world palaeoclimate time series.
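The transitivity monitored by wRNA is the standard graph transitivity of a recurrence network (nodes are time points, linked when states recur within a tolerance eps). A minimal sketch for a scalar window, omitting the embedding and the areawise significance testing used in the study:

```python
import numpy as np

def recurrence_transitivity(x, eps):
    """Transitivity of the eps-recurrence network of a scalar series."""
    A = (np.abs(x[:, None] - x[None, :]) < eps).astype(float)
    np.fill_diagonal(A, 0.0)
    A2 = A @ A
    closed = np.trace(A2 @ A)          # 6 x (number of triangles)
    triples = A2.sum() - np.trace(A2)  # 2 x (number of connected triples)
    return closed / triples if triples else 0.0

x = np.sin(np.linspace(0, 6 * np.pi, 300))
trans = recurrence_transitivity(x, eps=0.2)
print(trans)
```

In wRNA this quantity is computed in sliding windows, and anomalies are flagged where it deviates significantly from a null model.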

How to cite: Donner, R. and Lekscha, J.: Detecting dynamical anomalies in time series from different palaeoclimate proxy archives using windowed recurrence network analysis, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12938, https://doi.org/10.5194/egusphere-egu2020-12938, 2020

D2850 |
EGU2020-2705
Holger Lange, Michael Hauhs, Katharina Funk, Sebastian Sippel, and Henning Meesenburg

We analyze time series from several forested headwater catchments located adjacent to each other in the Bramke valley, Harz mountains (Germany), which have been monitored for decades for hydrology, hydrochemistry and forest growth. The data sets include meteorological variables, runoff rates, streamwater chemical concentrations, and others. The basic temporal resolution is daily for hydrometeorology and two-weekly for streamwater chemistry (in addition, the standing biomass of a Norway spruce stand is measured every couple of years).

A model was calibrated and run for the streamflow from one of the catchments, based on precipitation, temperature and (simulated) evapotranspiration of the growing trees, to elucidate the effect of forest growth on catchment hydrology.

The catchments exhibit long-term changes and spatial gradients related to atmospheric deposition, management and changing climate. After providing a short multivariate summary of the dataset, we present several nonlinear metrics suitable for detecting and quantifying subtle changes and for describing differing behavior, both between variables from the same catchment and for the same variable across catchments. The methods include, but are not limited to: Tarnopolski analysis, permutation entropy and complexity, q- and α-complexities, and Horizontal Visibility Graphs.
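Among the listed metrics, permutation entropy (Bandt and Pompe) is compact enough to sketch: the Shannon entropy of the ordinal patterns of a given length found in the series, here normalised to [0, 1]:

```python
import math
from collections import Counter

import numpy as np

def permutation_entropy(x, order=3):
    """Normalised Bandt-Pompe permutation entropy of a 1-D series."""
    counts = Counter(
        tuple(np.argsort(x[i:i + order])) for i in range(len(x) - order + 1)
    )
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum() / np.log2(math.factorial(order)))

rng = np.random.default_rng(0)
pe_trend = permutation_entropy(np.arange(100.0))     # single ordinal pattern
pe_noise = permutation_entropy(rng.normal(size=5000))
print(pe_trend, pe_noise)   # ~0 for a monotone trend, close to 1 for noise
```

Subtle changes in the patterns of a detrended series show up as shifts of this entropy even when linear statistics are unchanged.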

The detection of these changes is remarkable, because linear trends have already been removed prior to analysis. Hence, their presence reflects intrinsic changes in the patterns of the time series. The metrics also allow for a detailed model evaluation from a nonlinear perspective.

An important methodological aspect is the temporal resolution of the time series. We investigate the scaling behavior of the nonlinear metrics through aggregation or decimation to coarser resolutions and conclude on what the scaling behavior may imply for inverse (hydrological) modelling tasks.

How to cite: Lange, H., Hauhs, M., Funk, K., Sippel, S., and Meesenburg, H.: Analysis of long-term catchment data: a nonlinear perspective, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2705, https://doi.org/10.5194/egusphere-egu2020-2705, 2020

D2851 |
EGU2020-11492
Quentin Dalaiden, Stephane Vannitsem, and Hugues Goosse

Dynamical dependence between key observables and the surface mass balance (SMB) over Antarctica is analyzed in two historical runs performed with the MPI‐ESM‐P and CESM1‐CAM5 climate models. The approach is a novel method for evaluating the rate of information transfer between observables that goes beyond classical correlation analysis and allows for a directional characterization of dependence. It reveals that a large proportion of significant correlations do not correspond to dependence. In addition, three coherent results concerning the dependence of the SMB emerge from the analysis of both models: (i) the SMB over the Antarctic Plateau is mostly influenced by the surface temperature and sea ice concentration and not by large‐scale circulation changes; (ii) the SMB of the Weddell Sea and Dronning Maud Land coasts is not influenced significantly by the surface temperature; and (iii) the Weddell Sea coast is not significantly influenced by the sea ice concentration.
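One widely used estimator of such a directional rate of information transfer is Liang's maximum-likelihood formula for bivariate series; the sketch below illustrates that general approach on assumed toy dynamics and is not claimed to reproduce this study's exact method:

```python
import numpy as np

def liang_transfer(x1, x2, dt=1.0):
    """Liang-type estimate of the rate of information transfer from x2 to
    x1 (bivariate linear estimator; conventions follow Liang, 2014)."""
    dx1 = (x1[1:] - x1[:-1]) / dt          # Euler-difference derivative of x1
    a, b, d = x1[:-1], x2[:-1], dx1
    C = np.cov(np.vstack([a, b]))
    c1d = np.mean((a - a.mean()) * (d - d.mean()))
    c2d = np.mean((b - b.mean()) * (d - d.mean()))
    num = C[0, 0] * C[0, 1] * c2d - C[0, 1] ** 2 * c1d
    den = C[0, 0] ** 2 * C[1, 1] - C[0, 0] * C[0, 1] ** 2
    return num / den

# toy system: x2 drives x1 with no feedback
rng = np.random.default_rng(5)
n = 20000
x1, x2 = np.zeros(n), np.zeros(n)
for t in range(n - 1):
    x2[t + 1] = 0.8 * x2[t] + rng.normal()
    x1[t + 1] = 0.6 * x1[t] + 0.5 * x2[t] + 0.1 * rng.normal()
t21 = liang_transfer(x1, x2)   # transfer x2 -> x1: clearly non-zero
t12 = liang_transfer(x2, x1)   # transfer x1 -> x2: near zero
```

The asymmetry between the two estimates, rather than the symmetric correlation, is what carries the directional information.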

How to cite: Dalaiden, Q., Vannitsem, S., and Goosse, H.: Testing for Dynamical Dependence: Application to the Surface Mass Balance Over Antarctica, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11492, https://doi.org/10.5194/egusphere-egu2020-11492, 2020

D2852 |
EGU2020-9270
| Highlight
Andreas Gerhardus and Jakob Runge

Scientific inquiry seeks to understand natural phenomena by understanding their underlying processes, i.e., by identifying cause and effect. In addition to mere scientific curiosity, an understanding of cause and effect relationships is necessary to predict the effect of changing dynamical regimes and for the attribution of extreme events to potential causes. It is thus an important question to ask how, in cases where controlled experiments are not feasible, causation can still be inferred from the statistical dependencies in observed time series.

A central obstacle for such an inference is the potential existence of unobserved causally relevant variables. Arguably, this is more likely to be the case than not, for example unmeasured deep oceanic variables in atmospheric processes. Unobserved variables can act as confounders (meaning they are a common cause of two or more observed variables) and thus introduce spurious, i.e., non-causal dependencies. Despite these complications, the last three decades have seen the development of so-called causal discovery algorithms (an example being FCI by Spirtes et al., 1999) that are often able to identify spurious associations and to distinguish them from genuine causation. This opens the possibility for a data-driven approach to infer cause and effect relationships among climate variables, thereby contributing to a better understanding of Earth's complex climate system.

These methods are, however, not yet well adapted to some specific challenges that climate time series often come with, e.g. strong autocorrelation, time lags and nonlinearities. To close this methodological gap, we generalize the ideas of the recent PCMCI causal discovery algorithm (Runge et al., 2019) to time series where unobserved causally relevant variables may exist (in contrast, PCMCI made the assumption of no confounding). Further, we present preliminary applications to modes of climate variability.

How to cite: Gerhardus, A. and Runge, J.: Causal Discovery for Climate Time Series in the Presence of Unobserved Variables, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9270, https://doi.org/10.5194/egusphere-egu2020-9270, 2020

D2853 |
EGU2020-18778
Mirko Stumpo, Giuseppe Consolini, Tommaso Alberti, and Virgilio Quattrociocchi

The fundamental question of what causes what has always been a motivating motto for the natural sciences, the study of causality being crucial for characterizing dynamical relationships. In the framework of complex dynamical systems, both linear statistical tools and Granger causality models can drastically fail to detect causal relationships between time series, whereas information theory offers a powerful model-free statistical framework.

Here we discuss how to deal with the problem of measuring causal information in non-stationary complex systems by considering a local estimation of the information-theoretic functionals via ensemble-based statistics. Its application to investigating the dynamical coupling and relationships between the solar wind and the Earth's magnetosphere is also presented.
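As a baseline for intuition, a stationary plug-in transfer entropy for binary series fits in a few lines; the ensemble-based local estimator discussed in this contribution generalises this idea to non-stationary data:

```python
from collections import Counter

import numpy as np

def transfer_entropy(x, y):
    """Plug-in transfer entropy TE(x -> y) in bits for discrete series:
    the reduction in uncertainty about y[t+1] from knowing x[t] on top
    of y[t]. TE = H(y+,y) - H(y) - H(y+,y,x) + H(y,x)."""
    trips = list(zip(y[1:], y[:-1], x[:-1]))
    def H(events):
        c = np.array(list(Counter(events).values()), dtype=float)
        p = c / c.sum()
        return float(-(p * np.log2(p)).sum())
    return (H([(yp, yo) for yp, yo, _ in trips]) - H([yo for _, yo, _ in trips])
            - H(trips) + H([(yo, xo) for _, yo, xo in trips]))

rng = np.random.default_rng(2)
x = rng.integers(0, 2, 5000)
y = np.roll(x, 1)                 # y is a one-step delayed copy of x
te_xy = transfer_entropy(x, y)    # ~1 bit: x[t] fully determines y[t+1]
te_yx = transfer_entropy(y, x)    # ~0 bits
```

The strong asymmetry between `te_xy` and `te_yx` is the model-free signature of directional coupling that the magnetospheric analysis exploits.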

How to cite: Stumpo, M., Consolini, G., Alberti, T., and Quattrociocchi, V.: On the Ensemble Transfer Entropy Analysis of Non-Stationary Geophysical Time Series: The Case of Magnetospheric Response, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18778, https://doi.org/10.5194/egusphere-egu2020-18778, 2020

D2854 |
EGU2020-7063
Antonio Cicone, Angela Stallone, Massimo Materassi, and Haomin Zhou

Nonlinear and nonstationary signals are ubiquitous in real life. Their time-frequency analysis and feature extraction can help in solving open problems in many fields of research. Two decades ago, the Empirical Mode Decomposition (EMD) algorithm was introduced to tackle highly nonlinear and nonstationary signals. It is a local, adaptive, data-driven method that relaxes several limitations of the standard Fourier transform and wavelet transform techniques, yielding an accurate time-frequency representation of a signal. Over the years, several variants of the EMD algorithm have been proposed to improve the original technique, such as the Ensemble Empirical Mode Decomposition (EEMD) and Iterative Filtering (IF).

The versatility of these techniques has opened the door to their application in many applied fields, like geophysics, physics, medicine, and finance. Although the EMD- and IF-based techniques are more suitable than traditional methods for the analysis of nonlinear and nonstationary data, they can easily be misused if their known limitations, together with the assumptions they rely on, are not carefully considered. Here we call attention to some of the pitfalls encountered when implementing these techniques. Specifically, there are three critical factors that are often neglected: boundary effects; the presence of spikes in the original signal; and signals containing a high degree of stochasticity. We show how an inappropriate implementation of the EMD and IF methods can return an artefact-prone decomposition of the original signal. We conclude with best-practice guidelines for researchers who intend to use these techniques for their signal analysis.
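The boundary-effect pitfall is easy to reproduce with a toy, Iterative-Filtering-style sift in which the moving-average estimate of the local mean implicitly zero-pads the signal: the interior of the extracted component stays clean while the edges are distorted (an illustration of the pitfall, not of the original algorithms):

```python
import numpy as np

def extract_first_imf(x, span=25, n_sift=10):
    """Toy Iterative-Filtering-style sift: repeatedly subtract a moving-
    average estimate of the local mean. 'same'-mode convolution zero-pads
    the edges, which is exactly what causes the boundary artefacts."""
    imf = np.asarray(x, dtype=float).copy()
    kernel = np.ones(span) / span
    for _ in range(n_sift):
        imf = imf - np.convolve(imf, kernel, mode="same")
    return imf

fs = 1000
t = np.arange(1000) / fs
fast = np.sin(2 * np.pi * 40 * t)   # period of 25 samples, matching `span`
slow = np.sin(2 * np.pi * 3 * t)
imf1 = extract_first_imf(fast + slow)
err_core = np.max(np.abs(imf1[200:800] - fast[200:800]))
err_edge = np.max(np.abs(imf1[:25] - fast[:25]))
print(err_core, err_edge)   # the edges are far worse than the interior
```

Proper boundary extension (e.g. mirroring) before sifting is one of the practices such guidelines recommend.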

How to cite: Cicone, A., Stallone, A., Materassi, M., and Zhou, H.: New insights and best practices for the successful use of Empirical Mode Decomposition, Iterative Filtering and derived algorithms in the decomposition of nonlinear and nonstationary signals, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7063, https://doi.org/10.5194/egusphere-egu2020-7063, 2020

D2855 |
EGU2020-11461
Mikhail Kanevski, Federico Amato, and Fabian Guignard

The research deals with the application of advanced exploratory tools to study hourly spatio-temporal air pollution data collected by the NABEL monitoring network in Switzerland. The data analyzed consist of several pollutants, mainly NO2, O3 and PM2.5, measured during the last two years at 16 stations distributed over the country. The data are considered in two different ways: 1) as multivariate time series measured at the same station (different pollutants and environmental variables, such as temperature), and 2) as spatially distributed time series of the same pollutant. In the first case, it is interesting to study both univariate and multivariate time series and their complexity. In the second case, similarity between time series distributed in space can point to similar underlying phenomena and environmental conditions giving rise to the pollution. An important aspect of the data is that they are collected at places belonging to different land use classes (urban, suburban, rural, etc.), which helps in understanding and interpreting the results.

Nowadays, unsupervised learning algorithms are widely applied in intelligent exploratory data analysis. Well-known tasks of unsupervised learning include manifold learning, dimensionality reduction and clustering. In the present research, intrinsic and fractal dimensions, measures characterizing similarity and redundancy in data, and machine learning clustering algorithms were adapted and applied. The results obtained give new and important information on spatio-temporal air pollution patterns. The following results, among others, can be mentioned: 1) some measures of similarity (e.g., complexity-invariant distance) are efficient in discriminating between time series; 2) the intrinsic dimension characterizing the ensemble of monitoring data is pollutant dependent; 3) the clustering of the observed time series can be interpreted using the available information on land use.
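The complexity-invariant distance mentioned in point 1 is simple to sketch; the formulation below (Euclidean distance corrected by a ratio of complexity estimates, in the style of Batista et al.) is an assumption about which variant the authors used, and the demo series are made up for illustration:

```python
import numpy as np

def complexity_estimate(x):
    # "complexity" as the length of the stretched series:
    # square root of the sum of squared increments
    return np.sqrt(np.sum(np.diff(x) ** 2))

def cid(x, y):
    # complexity-invariant distance: Euclidean distance inflated by the
    # ratio of the two series' complexity estimates
    ce_x, ce_y = complexity_estimate(x), complexity_estimate(y)
    return np.linalg.norm(x - y) * max(ce_x, ce_y) / max(min(ce_x, ce_y), 1e-12)

# demo: two nearby smooth sinusoids vs. a rough white-noise series
t = np.linspace(0.0, 4 * np.pi, 500)
rng = np.random.default_rng(0)
a, b = np.sin(t), np.sin(t + 0.1)
noise = rng.normal(size=500)
```

The correction term penalizes pairing a smooth series with a rough one, so the two sinusoids end up much closer to each other than either is to the noise, which is the discriminating behaviour the abstract reports.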

How to cite: Kanevski, M., Amato, F., and Guignard, F.: Advanced Exploratory Analysis of Air Pollution Multivariate Spatio-Temporal Data, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11461, https://doi.org/10.5194/egusphere-egu2020-11461, 2020

D2856 |
EGU2020-3374
Carmelo Juez and Estela Nadal-Romero

Sediment transport is the major driver of change in most catchment systems. Beyond landscape evolution and river geomorphology, sediment dynamics are an important component of a number of physical, chemical and biological processes in river basins. Sediments thus impact the ecology of rivers, the sustainability of human infrastructure and basin-level fluxes of nutrients and carbon. For this reason, it is important to understand the temporal sediment response of mountain catchments to precipitation and run-off. This response is not unique and features components at intra-annual, annual and multi-year scales. In this research, we analyse a humid mountain badland area located in the Central Spanish Pyrenees. This type of badlands is characterized by non-linear and non-stationary precipitation and run-off cycles, which ultimately lead to complex sediment dynamics and yields. Based on spectral frequency analysis and wavelet decomposition, we were able to determine the dominant time scales of the local hydrological and sediment dynamics. Intra-annual and annual time scales were linked with the climatological characteristics of the catchment site. The multi-year response in the sediment yields reveals the importance of the sediment storage/depletion cycle of the catchment. The frequency and amplitude of fluctuations in precipitation, run-off and sediment yields were accurately predicted with the spectral frequency analysis and wavelet decomposition techniques used.
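The first step of such an analysis, picking the dominant time scale out of a periodogram, can be sketched on a synthetic monthly series; the 13-year length echoes the record in the title, but the series itself and its noise level are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_months = 13 * 12                    # thirteen years of monthly values
t = np.arange(n_months)
# synthetic run-off: annual cycle plus white noise (illustrative only)
runoff = np.sin(2 * np.pi * t / 12) + 0.3 * rng.normal(size=n_months)

# periodogram of the demeaned series
power = np.abs(np.fft.rfft(runoff - runoff.mean())) ** 2
freqs = np.fft.rfftfreq(n_months, d=1.0)   # cycles per month
peak = int(np.argmax(power[1:])) + 1       # skip the zero-frequency bin
dominant_period = 1.0 / freqs[peak]        # in months
```

For this series the periodogram peak lands on the 12-month bin, recovering the annual cycle; the wavelet decomposition used by the authors adds the time localization that a plain periodogram lacks.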

How to cite: Juez, C. and Nadal-Romero, E.: Thirteen years of hindsight into hydrological and sediment dynamics of a humid badlands catchment, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3374, https://doi.org/10.5194/egusphere-egu2020-3374, 2020

D2857 |
EGU2020-708
Anastasia Nekrasova and Vladimir Kossobokov

The observed variability of seismic dynamics of the Kamchatka Region is characterized in terms of several moving averages, including (i) seismic rate, (ii) the Benioff strain release, (iii) inter-event time, τ, and (iv) the USLE control parameter, η (where USLE stands for Unified Scaling Law for Earthquakes, i.e. a generalization of the Gutenberg-Richter relationship accounting for naturally fractal distribution of earthquake loci, which states that the distribution of inter-event times τ depends only on the value of variable η).

The variability of seismic dynamics has been evaluated and compared in each of four out of the ten separate seismic focal zones of the Kamchatka region and adjacent areas defined by Levina et al. (2013), i.e., (1) the seismic focal zone of the Kurils and South Kamchatka, (2) the northern part of the Kamchatka seismic focal zone, (3) the Commander segment of the Aleutian arc, and (4) the continental region of Kamchatka. In particular, we considered all earthquakes of magnitude 3.5 or larger in 1996-2019 available from the open data catalog of the Kamchatka Branch of GS RAS, the Earthquakes Catalogue for Kamchatka and the Commander Islands (1962–present; http://sdis.emsd.ru/info/earthquakes/catalogue.ph).
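While the USLE coefficients require the full space-time fitting sketched above, the Gutenberg-Richter ingredient that the USLE generalizes can be illustrated with the standard Aki maximum-likelihood b-value estimator; the synthetic catalog, the completeness magnitude and the true b-value below are made up for illustration and are not the authors' data:

```python
import numpy as np

def b_value_aki(magnitudes, m_c):
    """Aki (1965) maximum-likelihood b-value estimate for a catalog
    assumed complete above the magnitude threshold m_c."""
    m = np.asarray(magnitudes, dtype=float)
    m = m[m >= m_c]
    return np.log10(np.e) / (m.mean() - m_c)

# synthetic Gutenberg-Richter catalog: magnitude excesses above m_c = 3.5
# are exponentially distributed with rate b * ln(10) (true b = 1.0 here)
rng = np.random.default_rng(42)
m_c, b_true = 3.5, 1.0
mags = m_c + rng.exponential(scale=np.log10(np.e) / b_true, size=20000)
b_hat = b_value_aki(mags, m_c)
```

With 20 000 events the estimate comes back within a few percent of the true value; on real regional sub-catalogs the much smaller samples make the uncertainty of such estimates, and of the USLE parameters built on top of them, correspondingly larger.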

How to cite: Nekrasova, A. and Kossobokov, V.: Unified Scaling Law for Earthquakes: space-time dependent assessment in Kamchatka region, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-708, https://doi.org/10.5194/egusphere-egu2020-708, 2019