This interdisciplinary session welcomes contributions on novel conceptual and methodological approaches to the analysis and statistical-dynamical modeling of observational as well as model time series from all geoscientific disciplines.
Methods to be discussed include, but are not limited to: linear and nonlinear methods of time series analysis; time-frequency methods; statistical inference for nonlinear time series, including empirical inference of causal linkages from multivariate data; nonlinear statistical decomposition and related techniques for multivariate and spatio-temporal data; nonlinear correlation analysis and synchronisation; surrogate data techniques; filtering approaches and nonlinear methods of noise reduction; and artificial intelligence and machine learning based analysis and prediction for univariate and multivariate time series.
Contributions on methodological developments and applications to problems across all geoscientific disciplines are equally encouraged. We particularly aim to foster the transfer of new methodological data analysis and modeling concepts among different fields of the geosciences.
Sub-Session "Mathematical Climatology and Space-time Data Analysis" (Abdel Hannachi, Amro Elfeki, Christian Franzke, Muhammad Latif, Carlos Pires)
Recent progress in mathematical methods for weather and climate nonlinear dynamics and data analysis calls for a session that focuses on these methods. Novel and powerful mathematical methods have been developed and applied across different areas of climate science. Because these methods are used within specific contexts, they mostly go unnoticed by climate researchers. The proposed session will provide an opportunity for climate scientists and researchers developing mathematical methods for climate to come together and present their findings in a transparent way, making them easily accessible to other climate scientists who are looking for specific methods to solve their problems.
Contributions are encouraged from researchers working on mathematical methods and their application to weather and climate. We particularly welcome contributions on optimization, dimension reduction and data mining, space-time pattern identification, machine learning, statistical prediction modelling, nonlinear methods, Bayesian statistics, and Markov chain Monte Carlo (MCMC) methods in stochastic modelling.
vPICO presentations: Thu, 29 Apr
The El Niño–Southern Oscillation (ENSO) index has been shown to be a non-Gaussian and nonlinear stochastic process. Here we assess the statistical significance of non-Gaussianity and nonlinearity through the analysis of third-order statistics of the El Niño 3.4 index in the period 1870–2018, namely the bicovariance (lagged third-order moments) and the bispectrum (its 2D Fourier transform). The analysis of the bicovariance reveals a tendency for an extreme (weak) ENSO signal in boreal spring to be followed by La Niñas (El Niños) in the forthcoming boreal winter, thus contributing to a nonlinear attenuation of the ENSO spring predictability barrier. The bispectrum provides a spectral decomposition of skewness, analogous to the spectral decomposition of variance. Positive and negative real bispectrum values identify triadic phase synchronizations (at frequencies f1, f2 and f1+f2, mostly in the period range 2–6 years) contributing respectively to extreme El Niños and La Niñas. The known positive ENSO skewness and the main features of the ENSO bicovariance and bispectrum are shown to be well reproduced by fitting a bilinear stochastic model in which the influence of non-observed variables is simulated by a delayed multiplicative noise, which is able to generate non-Gaussianity and nonlinearity. The model shows improved forecasts, with respect to benchmark linear models, up to four trimesters ahead, especially of the amplitude of extreme El Niños. The authors would like to acknowledge MISU (Meteorological Institute at Stockholm University) and the financial support of FCT through project UIDB/50019/2020 – IDL and project JPIOCEANS/0001/2019 (ROADMAP: ’The Role of ocean dynamics and Ocean–Atmosphere interactions in Driving cliMAte variations and future Projections of impact–relevant extreme events’).
How to cite: Pires, C. and Hannachi, A.: Bicovariance and Bispectrum of ENSO index and its impact in nonlinear predictability, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-8439, https://doi.org/10.5194/egusphere-egu21-8439, 2021.
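As a rough illustration of the quantities discussed above (this is not the authors' code), the lagged third-order moments of a standardised index can be estimated with a few lines of NumPy; the lag range and the synthetic test series are assumptions made here:

```python
import numpy as np

def bicovariance(x, max_lag=12):
    """Estimate lagged third-order moments C[l1, l2] = E[x(t) x(t+l1) x(t+l2)]
    of a standardised anomaly series for lags 0..max_lag."""
    x = np.asarray(x, dtype=float)
    x = (x - x.mean()) / x.std()
    n = len(x)
    C = np.zeros((max_lag + 1, max_lag + 1))
    for l1 in range(max_lag + 1):
        for l2 in range(max_lag + 1):
            m = n - max(l1, l2)
            C[l1, l2] = np.mean(x[:m] * x[l1:l1 + m] * x[l2:l2 + m])
    return C

# positively skewed test series: C[0, 0] recovers the skewness (close to 2 for
# a standardised exponential variate)
rng = np.random.default_rng(0)
x = rng.exponential(size=20000) - 1.0
C = bicovariance(x, max_lag=6)
print(C[0, 0])
```

The bispectrum discussed in the abstract could then be obtained as the 2D Fourier transform of such a bicovariance estimate.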
The Madden–Julian Oscillation (MJO) is the dominant mode in the tropical atmosphere on sub-seasonal time scales, with a strong influence on tropical weather and impacts on higher latitudes. Although it is a phenomenon studied in depth, its intensification and attenuation mechanisms are not fully understood. The purpose of this communication is to analyse the statistics of MJO events using the Wheeler and Hendon index.
In our framework, an MJO event takes place when the amplitude of the index stays above a threshold for a certain number of days, depending on the averaging of the signal. With this, we define the maximum amplitude of an event, its duration, and its size, which is the sum of the amplitudes over the duration of the event.
We then analyse how the statistical properties change under variations in the definition of events. We further explore whether the event distributions are heavy-tailed. As the MJO interacts with other phenomena and has impacts on higher latitudes, we compare the statistics of the MJO with those of other atmospheric indices. These statistical analyses may contribute to the knowledge of the intensification and attenuation processes that constitute the basic dynamics of the MJO.
How to cite: Minjares, M., Barreiro, M., and Corral, Á.: Statistical Analysis of Madden-Julian Events Using Time Series Indices., EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-13469, https://doi.org/10.5194/egusphere-egu21-13469, 2021.
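The event definition described in the preceding abstract can be sketched as follows; the threshold of 1, the 5-day minimum duration and the toy series are illustrative assumptions, not the authors' choices:

```python
import numpy as np

def mjo_events(amplitude, threshold=1.0, min_days=5):
    """Extract events: contiguous runs where the index amplitude stays above
    `threshold` for at least `min_days`. Returns (max_amplitude, duration, size)
    per event, with size the sum of amplitudes over the event."""
    above = amplitude > threshold
    edges = np.diff(above.astype(int), prepend=0, append=0)  # run boundaries
    starts = np.flatnonzero(edges == 1)
    ends = np.flatnonzero(edges == -1)                       # exclusive
    events = []
    for s, e in zip(starts, ends):
        if e - s >= min_days:
            seg = amplitude[s:e]
            events.append((float(seg.max()), int(e - s), float(seg.sum())))
    return events

# toy amplitude series: one 6-day event and one 3-day excursion (too short)
amp = np.array([0.5, 1.2, 1.5, 2.0, 1.8, 1.4, 1.1, 0.9, 0.2, 1.3, 1.1, 1.2, 0.7])
events = mjo_events(amp, threshold=1.0, min_days=5)
print(events)  # a single event: max 2.0, duration 6 days, size close to 9
```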
The nature of the climate system is very complex: a network of mutual interactions between ocean and atmosphere leads to a multitude of overlapping geophysical processes. As a consequence, the same process often has a signature in different climate variables, but with spatial and temporal shifts. Orthogonal decompositions of geophysical data fields, such as Canonical Correlation Analysis (CCA), allow filtering out common dominant patterns between two different variables by maximizing cross-correlation. In general, however, CCA suffers from (i) the orthogonality constraint, which tends to produce unphysical patterns, and (ii) the use of direct correlations, which leads to signals that are merely shifted in time being considered as distinct patterns.
In this work, we propose an extension of CCA, complex rotated CCA (crCCA), to address both limitations. First, we generate complex signals using the Hilbert transform. To reduce the spectral leakage inherent in Hilbert transforms, we extend the time series using the Theta model, thus creating an anti-leakage buffer. We then perform the orthogonal decomposition in complex space, allowing us to detect out-of-phase signals. A subsequent Varimax rotation removes the orthogonality constraint to allow more geophysically meaningful modes.
We applied crCCA to a pair of variables expected to be coupled: Pacific sea surface temperature and continental precipitation. We show that crCCA successfully captures the temporally and spatially complex modes of (i) seasonal cycle, (ii) canonical ENSO, and (iii) ENSO Modoki, in a compact manner that allows an easy geophysical interpretation. The proposed method has the potential to be useful especially, but not limited to, studies on the prediction of continental precipitation by other climate variables. An implementation of the method is readily available as a Python package.
How to cite: Rieger, N., Corral, A., Turiel, A., and Olmedo, E.: Complex rotated CCA: a method to correlate lagged geophysical variables, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-9695, https://doi.org/10.5194/egusphere-egu21-9695, 2021.
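The first step of crCCA, turning each series into a complex (analytic) signal while guarding against Hilbert-transform leakage, can be sketched as below. The simple mirror padding used here merely stands in for the Theta-model extension described in the abstract:

```python
import numpy as np
from scipy.signal import hilbert

def analytic_signal_buffered(x, buffer=64):
    """Analytic (complex) signal of x with an anti-leakage buffer:
    the series is extended at both ends (mirror padding here, standing in
    for the Theta-model extension), Hilbert-transformed, and the buffer
    is then discarded."""
    xp = np.r_[x[buffer:0:-1], x, x[-2:-buffer - 2:-1]]  # mirror-pad both ends
    z = hilbert(xp)
    return z[buffer:buffer + len(x)]

# for a pure oscillation the analytic signal has near-constant amplitude
t = np.arange(500)
x = np.cos(2 * np.pi * t / 50)
z = analytic_signal_buffered(x)
amp = np.abs(z)
print(amp.mean())  # close to 1
```

The complex series obtained this way carry phase information, so a subsequent (rotated) CCA in complex space can pair signals that are shifted in time.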
Stratospheric variability has received increasing attention due to its potential impact on the tropospheric circulation. Extreme states of the stratospheric polar vortex have been associated with recurring tropospheric weather patterns more than 2-3 weeks after the initial stratospheric signal. Standard linear regression methods used to assess the statistical stratosphere-troposphere connection estimate the effect of a stratospheric predictor on the mean of the distribution of a tropospheric response variable. However, supplementary information on the impact of extreme stratospheric behavior is hidden in the tails of the distribution, which can behave differently from the mean. Therefore, we use quantile regression, a method that enables us to model the complete conditional distribution of the response variable. This presentation explores various quantiles of the conditional distribution to investigate the impact of stratospheric variability on the tropospheric circulation using the ERA5 reanalysis dataset. A comparison between (lagged) linear and (lagged) quantile regression reveals significant differences, making the latter a valuable tool that offers additional information about the statistical connection between the stratosphere and the troposphere.
How to cite: Finke, K. and Hannachi, A.: Exploring the Tropospheric Response to Stratospheric Variability Using Lagged Quantile Regression, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-15446, https://doi.org/10.5194/egusphere-egu21-15446, 2021.
Conventional analyses of large-scale atmospheric variability and teleconnections are obtained using the empirical orthogonal function (EOF) method, which was developed mainly to deal with single fields. With the increasing amount of observed/simulated large-scale atmospheric data, including climate model output (e.g., CMIP), there is a need for methods with efficient algorithms that enable the analysis and comparison/validation of climate model simulations. Here we describe the common EOF method, which finds common patterns in a set of large-scale atmospheric fields and enables comparing several model outputs simultaneously. A step-wise/sequential algorithm is presented that avoids a difficulty encountered in previous algorithms, namely the lack of simultaneous monotonic change of the eigenvalues of all fields. The theory and algorithm are presented, and applications to large-scale teleconnections from various reanalysis products and CMIP6 are discussed.
How to cite: Hannachi, A., Finke, K., and Trendafilov, N.: Common EOFs in atmospheric science and large-scale flow, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-12363, https://doi.org/10.5194/egusphere-egu21-12363, 2021.
We present a new method for identifying dominant dynamical regimes underlying the observed mid-latitude atmospheric circulation. The method combines the partitioning of recurrence networks with kernel principal component analysis. It enables the detection of significant regimes of variability in addition to obtaining dynamical variables which can be used for regime embedding. The method is applied to the analysis of geopotential height anomalies of the mid-latitude atmosphere in the Northern Hemisphere for the winter seasons from 1981 to the present. The identified regimes, as well as the set of dynamical variables, explain large-scale weather patterns which are associated, e.g., with severe winters over Eurasia and North America. Pronounced inter-annual signatures are also found in the long-term dynamics of the regimes’ frequencies, which are shown to be closely related to the quasi-biennial oscillation of the tropical stratosphere. The method is presented, and prospects for empirical modeling of the atmospheric circulation regimes and long-term climate predictability are discussed. The work is supported by the Russian Science Foundation (grant 19-42-04121).
How to cite: Mukhin, D. and Hannachi, A.: Analyzing regimes of mid-latitude atmosphere circulation by novel nonlinear data decomposition method, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-3144, https://doi.org/10.5194/egusphere-egu21-3144, 2021.
This presentation discusses two examples of the use of advanced pattern techniques in weather and climate science. Firstly, optimal mode decomposition (OMD) is employed for linear inverse modelling of large-scale atmospheric flow. The OMD technique determines a low-rank approximation to a high-dimensional dynamical system in terms of a linear empirical model; a set of patterns and a system matrix are identified simultaneously by maximising the explained predictive variance. The method is exemplified on a quasi-geostrophic atmospheric model with realistic mean state and variability. Considerable improvements in prediction skill are observed compared to the traditional approach based on principal components or dynamic mode decomposition (DMD). Secondly, nonlinear principal prediction patterns are used for stochastic subgrid-scale modelling. Pairs of predictor-predictand patterns are determined in the space of the resolved variables and the space of the subgrid forcing, respectively, and linked in a predictive manner. The predictor patterns may contain nonlinear functions of state variables. On top of this deterministic subgrid model the predictand patterns are forced stochastically. The approach is demonstrated on the two-scale Lorenz 1996 system.
How to cite: Kwasniok, F.: Advanced pattern techniques in weather and climate science, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-15532, https://doi.org/10.5194/egusphere-egu21-15532, 2021.
Over time scales between 10 days and 10-20 years – the macroweather regime – atmospheric fields, including the temperature, respect statistical scale symmetries, such as power-law correlations, that imply the existence of a huge memory in the system that can be exploited for long-term forecasts. The Stochastic Seasonal to Interannual Prediction System (StocSIPS) is a stochastic model that exploits these symmetries to perform long-term forecasts. It models the temperature as the high-frequency limit of the (fractional) energy balance equation (fractional Gaussian noise), which governs radiative equilibrium processes when the relevant relaxation processes are power-law rather than exponential. Such processes are obtained when the order of the relaxation equation is fractional rather than integer, and they are solved as past-value problems rather than initial-value problems.
Long-range weather prediction is conventionally an initial value problem that uses the current state of the atmosphere to produce ensemble forecasts. In contrast, StocSIPS predictions for long-memory processes are “past value” problems that use historical data to provide conditional forecasts. Cross-correlations can be used to define teleconnection patterns and to identify possible dynamical interactions, but they do not necessarily imply any causation. Using the precise notion of Granger causality, we show that for long-range stochastic temperature forecasts, the cross-correlations are only relevant at the level of the innovations – not the temperatures. Extended here to the multivariate case, m-StocSIPS produces realistic space-time temperature simulations. Although it has no Granger causality, we are able to reproduce emergent properties including realistic teleconnection networks and El Niño events and indices.
How to cite: Del Rio Amador, L. and Lovejoy, S.: Correlations versus causality in Stochastic Long-range Forecasting as a Past Value Problem, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-3747, https://doi.org/10.5194/egusphere-egu21-3747, 2021.
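The "past value problem" for a long-memory process can be illustrated for fractional Gaussian noise, whose autocovariance follows directly from the Hurst exponent H. This is a minimal sketch of the idea, not the StocSIPS code; the memory length m and horizon are arbitrary choices:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def fgn_acf(H, lags):
    """Autocovariance of unit-variance fractional Gaussian noise at integer lags."""
    k = np.asarray(lags, dtype=float)
    return 0.5 * (np.abs(k + 1) ** (2 * H) - 2 * np.abs(k) ** (2 * H)
                  + np.abs(k - 1) ** (2 * H))

def past_value_weights(H, m, horizon=1):
    """Optimal linear weights for predicting x[t+horizon] from the m most recent
    values x[t], x[t-1], ..., x[t-m+1]: the 'past value' problem for fGn."""
    col = fgn_acf(H, np.arange(m))                     # Toeplitz autocovariance
    rhs = fgn_acf(H, np.arange(horizon, horizon + m))  # cov(past, future)
    return solve_toeplitz(col, rhs)

# fraction of variance explained by conditioning on the past (skill = r @ w)
for H in (0.5, 0.75, 0.9):
    w = past_value_weights(H, m=50)
    skill = fgn_acf(H, np.arange(1, 51)) @ w
    print(H, skill)
```

For H = 0.5 (no memory) the weights vanish and there is no skill, whereas for H approaching 1 even distant past values contribute, which is the memory that StocSIPS exploits.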
In climatology, correlation maps are often used to study the relationships between a 1D time series and a (spatiotemporal) 2D or even 3D field. However, correlation measures do not necessarily capture causal relationships, and similarities in correlation maps obtained from different indices may appear if the set of indices contains correlated variables. Causal discovery tools such as the Peter and Clark – Momentary Conditional Independence (PCMCI) algorithm can help in disentangling spurious from causal links in both linear and nonlinear frameworks. In the linear case considered in the present work, PCMCI extends standard correlation analysis by removing the confounding effects of autocorrelation, indirect links and common drivers. Combining PCMCI and Causal Effect Networks on a 2D field helps to identify, and subsequently discard, the spurious correlations, and thereby allows retaining only the causal links. The resulting visualization technique is referred to as a “causal map”.
In this presentation, we illustrate the application of causal maps in combination with maximum covariance analysis to assess how tropical convection interacts with mid-latitude circulation during boreal summer at different intraseasonal timescales. The obtained causal maps reveal the dominant patterns of interaction and highlight specific mid-latitude regions that are most strongly connected to tropical convection. In general, the identified causal teleconnection patterns are only mildly affected by ENSO variability and the tropical-mid-latitude linkages remain similar under different types of ENSO phases. Still, La Niña strengthens the South Asian monsoon generating a stronger response in the mid-latitudes, while during El Niño periods, the western North Pacific summer monsoon pattern is reinforced. Our study paves the way for a process-based validation of boreal summer teleconnections in (sub-)seasonal forecast models and climate models and therefore provides important clues towards improved sub-seasonal and climate projections.
Reference: G. Di Capua, J. Runge, R.V. Donner, B. van den Hurk, A.G. Turner, R. Vellore, R. Krishnan, D. Coumou: Dominant patterns of interaction between the tropics and mid-latitudes in boreal summer: Causal relationships and the role of time-scales. Weather and Climate Dynamics, 1, 519-539 (2020)
How to cite: Di Capua, G. and Donner, R. V.: Causal maps versus correlation maps: visual analysis of tropical-extratropical atmospheric teleconnections using causal discovery, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-13147, https://doi.org/10.5194/egusphere-egu21-13147, 2021.
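The confounder-removal idea that distinguishes a causal map from a correlation map can be shown with a minimal linear example. This is not the PCMCI algorithm itself (which additionally handles lag selection and iterative condition-set construction); it only demonstrates the core step of conditioning out a common driver:

```python
import numpy as np

def partial_corr(a, b, c):
    """Correlation of a and b after regressing out c (the conditioning set)."""
    C = np.column_stack([np.ones(len(c)), c])
    ra = a - C @ np.linalg.lstsq(C, a, rcond=None)[0]
    rb = b - C @ np.linalg.lstsq(C, b, rcond=None)[0]
    return np.corrcoef(ra, rb)[0, 1]

rng = np.random.default_rng(2)
n = 5000
z = np.zeros(n)                       # common driver, AR(1)
for t in range(1, n):
    z[t] = 0.9 * z[t - 1] + rng.normal()
x = z + rng.normal(size=n)            # both x and y respond to z ...
y = z + rng.normal(size=n)            # ... but do not influence each other

plain = np.corrcoef(x, y)[0, 1]       # large spurious correlation
causal = partial_corr(x, y, z)        # near zero once z is conditioned on
print(plain, causal)
```

A correlation map would paint the x-y link as strong; conditioning on the driver, as PCMCI does systematically for autocorrelation, indirect links and common drivers, removes it.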
We propose a novel causal discovery method for large-scale gridded time series datasets. Causal discovery has been applied to study a number of problems in climate research in recent years. Causal discovery can be conducted either among spatially aggregated variables (such as modes of climate variability) or by inferring a climate network where the associations among pairs of grid points are treated as a network. In the latter case, causal methods have to deal with several challenges arising from the high dimensionality of such datasets and the data's spatially and temporally redundant nature.
Our method, called Mapped-PCMCI, aims to overcome some of these challenges. The central idea is based on the assumption that there is a lower-dimensional representation of the causal dependencies among different locations. The method first reconstructs a lower-dimensional spatial representation of the data, then conducts causal discovery in that lower-dimensional space using the PCMCI method (Runge et al., 2019), and finally maps the causal relations back to the grid level. Using spatiotemporal data generated with the spatially aggregated vector-autoregressive (SAVAR) model (Tibau et al., 2020), we demonstrate that Mapped-PCMCI outperforms state-of-the-art methods by orders of magnitude by exploiting the assumption of a lower-dimensional dependency structure. Mapped-PCMCI can be used to better estimate climate networks and thereby help to understand the climate system from the perspective of complex network theory.
J. Runge, P. Nowack, M. Kretschmer, S. Flaxman, D. Sejdinovic, Detecting and quantifying causal associations in large nonlinear time series datasets. Sci. Adv. 5, eaau4996 (2019).
Tibau, X.-A., Reimers, C., Eyring, V., Denzler, J., Reichstein, M., and Runge, J.: Spatiotemporal model for benchmarking causal discovery algorithms, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9604, https://doi.org/10.5194/egusphere-egu2020-9604, 2020
How to cite: Tibau Alberdi, X.-A., Gerhardus, A., Eyring, V., Denzler, J., and Runge, J.: Mapped-PCMCI: an algorithm for causal discovery at the grid level, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-5633, https://doi.org/10.5194/egusphere-egu21-5633, 2021.
Impacts of space weather include possible disruption to electrical power systems, aviation, communication systems, and satellite systems. The climate of space weather is modulated by the solar cycle. The overall level of solar activity, and the response at Earth, varies within and between successive solar cycles. Quantifying space weather risk requires understanding how the occurrence frequency of events of a given size varies with the strength of each solar cycle.
The auroral electrojet index (AE) is a geomagnetic index which parameterises the high-latitude geomagnetic response at Earth. We consider non-overlapping 1-year samples of AE at different solar cycle phases. We use data-data quantile-quantile plots to identify the 75th quantile as the threshold between two physical components in the cumulative distribution function: the bulk of the distribution lies below the threshold, while above it is the long tail. The magnitude of the 75th quantile threshold scales with the overall solar cycle activity level. At solar maximum, the 75th quantile corresponds to events which exceed 160 - 350 nT. We find that above the 75th quantile of observed data records, there exists an underlying functional form for the tail of the cumulative distribution function which does not change from one solar maximum to the next.
Bursts, or excursions above a fixed threshold in the AE index time series, characterise space weather events. We perform the first study of variation in AE burst statistics within and between the last four solar cycles. We will discuss burst statistics for solar cycle maximum, minimum and declining phases. We find that, for bursts above the 75th quantile threshold, the functional form of the burst return period distribution is stable over successive solar maxima. A key result of crossing theory is that the time-series-averaged burst return period and duration are related to each other via the cumulative distribution function of the raw observations. If the overall amplitude of the upcoming solar maximum can be predicted, our results may be used to provide constraints on the distribution of event return times.
How to cite: Bergin, A., Chapman, S., Moloney, N., and Watkins, N.: Quantifying variation of geomagnetic index empirical distribution and burst statistics across successive solar cycles., EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-4641, https://doi.org/10.5194/egusphere-egu21-4641, 2021.
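The burst statistics described above can be sketched as follows; the Gaussian toy series stands in for the AE index, and only the threshold convention (excursions above the 75th quantile) is taken from the abstract:

```python
import numpy as np

def burst_stats(x, q=0.75):
    """Bursts: excursions of x above its q-quantile threshold.
    Returns the threshold, burst durations, and return periods
    (times between successive burst onsets)."""
    thr = np.quantile(x, q)
    above = x > thr
    edges = np.diff(above.astype(int), prepend=0)
    starts = np.flatnonzero(edges == 1)
    ends = np.flatnonzero(edges == -1)
    if above[-1]:                      # final burst runs to the end of record
        ends = np.r_[ends, len(x)]
    durations = ends - starts
    return_periods = np.diff(starts)
    return thr, durations, return_periods

rng = np.random.default_rng(3)
x = rng.normal(size=10000)
thr, dur, ret = burst_stats(x, q=0.75)
# sanity check: total time above the 75th quantile is ~25% of the record,
# consistent with crossing theory linking durations, return periods and the CDF
print(dur.sum() / len(x))
```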
Space-weather events known as storms/substorms can have severe impacts on technological systems, on the ground and in space, including damage to satellites and power blackouts in severe cases. Quantitative understanding of the highly nonlinear magnetospheric system during storms/substorms is important as our reliance on space-based systems increases. We perform network analysis on the 100+ ground-based magnetometer stations collated by SuperMAG. One of the key geomagnetic responses to space weather events is Pc waves, which are oscillations along whole magnetospheric magnetic field lines. Recently, SuperMAG has offered the full set of Pc measurements, collating magnetometer data globally. High-quality Pc wave data had previously only been available locally across magnetometer chains; with SuperMAG, these measurements are now available globally with uniform background calibration and time-base. Fully exploiting this data requires a new application of analysis tools: for the first time, we apply dynamical network analysis to this data set. Obtaining Pc waves over a range of frequencies allows us to probe multiple time and length scales, likely corresponding to different physical generation mechanisms. We aim to obtain the global Pc wave dynamical networks over individual space weather events in order to quantify the full spatio-temporal response of the magnetosphere to storms/substorms with a few network parameters.
To create the network, we first band-pass filter the magnetometer time series data into four known frequency intervals. Next, the data is time-lagged cross-correlated (TLXC) for each band, using a window at least twice the Pc wave period of interest. We then use noise surrogates to establish a threshold that filters out insignificant peak TLXC values. For each windowed TLXC we apply a peak-classification routine (PCR) to determine whether a signal is wavelike or not, and then determine the phase difference. The PCR determines whether a network connection between two geospatially located magnetometer stations is directed or undirected for each time window: if the phase difference of the TLXC function is found to be non-zero, a directed network connection pair is formed; otherwise, an undirected network connection pair is formed. We perform the TLXC and PCR for each frequency band and between all magnetometer time-series pairs to obtain four dynamical directed and four dynamical undirected networks. The undirected networks quantify the onset time and spatial extent of large-scale coherent Pc wave activity, while the directed networks also quantify how non-coherent Pc wave activity propagates across the magnetosphere.
Quantifying the full spatio-temporal response of the magnetosphere across hundreds of ground-based magnetometers with a few parameters also forms the basis for statistical studies across many events.
How to cite: Chaudhry, S., Chapman, S., and Gjerloev, J.: Quantifying space-weather events using dynamical network analysis of ground based magnetometers, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-5804, https://doi.org/10.5194/egusphere-egu21-5804, 2021.
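The first two steps of this pipeline, band-pass filtering followed by time-lagged cross-correlation with a surrogate significance threshold, can be sketched as below. The filter band, window length and permutation surrogates are simplified assumptions (the abstract uses noise surrogates and a dedicated peak-classification routine):

```python
import numpy as np
from scipy.signal import butter, filtfilt, correlate, correlation_lags

def tlxc(a, b, max_lag):
    """Normalised time-lagged cross-correlation of two series; returns
    (peak value, lag of the peak). Under SciPy's convention a peak at a
    negative lag means the first series leads the second."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    c = correlate(a, b, mode='full') / len(a)
    lags = correlation_lags(len(a), len(b), mode='full')
    keep = np.abs(lags) <= max_lag
    i = np.argmax(np.abs(c[keep]))
    return c[keep][i], lags[keep][i]

rng = np.random.default_rng(4)
n = 3000
bb, ab = butter(4, [0.02, 0.1], btype='band', fs=1.0)     # one Pc-like band
source = filtfilt(bb, ab, rng.normal(size=n))
station1 = source + 0.3 * rng.normal(size=n)
station2 = np.roll(source, 5) + 0.3 * rng.normal(size=n)  # 5-sample delay

peak, lag = tlxc(station1, station2, max_lag=20)
# surrogate threshold: peak TLXC of permuted (structure-destroyed) copies
surr = max(abs(tlxc(rng.permutation(station1), station2, 20)[0]) for _ in range(20))
print(peak, lag, surr)
```

The correlation peak stands well above the surrogate level and occurs at a non-zero lag, which in the abstract's scheme would yield a directed network connection between the two stations.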
The Earth’s magnetosphere is characterized by a considerable degree of dynamical complexity resulting from the interaction of different multiscale processes, which can be both directly driven/triggered by changes in interplanetary medium conditions and due to internal processes of the magnetosphere. This complexity can be characterized using both “classical” and “new” dynamical systems tools. Recent work has demonstrated that recurrence-plot-based techniques may play a pivotal role in such an assessment.
In this presentation, I will summarize some recent results on applications of recurrence quantification analysis and recurrence network analysis to different geomagnetic indices (Dst, SYM-H, ASY-H, AE) reflecting the variability of the Earth’s electromagnetic environment at different time scales and magnetic latitudes. In addition, the same techniques are applied to some essential properties of the solar wind which are likely to have a relevant effect on geomagnetic field fluctuations and might serve as triggers of instability leading to geospace magnetic storms and/or magnetospheric substorms. The obtained findings underline that dynamical fluctuations of the geomagnetic field during periods of magnetospheric quiescence and storminess indeed exhibit distinctively different levels of dynamical complexity. Moreover, they provide additional evidence for a time-scale separation in magnetospheric dynamics, which is further characterized by employing a multi-scale version of recurrence analysis utilizing a continuous wavelet transform of the signals of interest. The corresponding results can be of potential relevance for the development of improved approaches to space weather modelling and forecasting.
R.V. Donner, V. Stolbova, G. Balasis, J.F. Donges, M. Georgiou, S. Potirakis, J. Kurths: Temporal organization of magnetospheric fluctuations unveiled by recurrence patterns in the Dst index. Chaos, 28, 085716 (2018)
R.V. Donner, G. Balasis, V. Stolbova, M. Georgiou, M. Wiedermann, J. Kurths: Recurrence based quantification of dynamical complexity in the Earth's magnetosphere at geospace storm timescales. Journal of Geophysical Research - Space Physics, 124, 90-108 (2019)
J. Lekscha, R.V. Donner: Areawise significance tests for windowed recurrence network analysis. Proceedings of the Royal Society A, 475 (2228), 20190161 (2019)
T. Alberti, J. Lekscha, G. Consolini, P. De Michelis, R.V. Donner: Disentangling nonlinear geomagnetic variability during magnetic storms and quiescence by timescale dependent recurrence properties. Journal of Space Weather and Space Climate, 10, 25 (2020)
How to cite: Donner, R.: Recurrence-Based Quantification of Multi-Scale Dynamical Complexity in the Earth’s Magnetosphere, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-13207, https://doi.org/10.5194/egusphere-egu21-13207, 2021.
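A minimal from-scratch sketch of recurrence quantification, here the determinism (DET) measure on a toy trajectory, illustrates the kind of complexity measure employed; the recurrence threshold and the test signals are arbitrary choices, not those of the cited studies:

```python
import numpy as np

def recurrence_matrix(X, eps):
    """Binary recurrence matrix of a trajectory X (time, dim):
    R[i, j] = 1 when states i and j are closer than eps."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return (d < eps).astype(int)

def determinism(R, lmin=2):
    """DET: fraction of recurrence points on diagonal lines of length >= lmin."""
    n = len(R)
    hist = np.zeros(n + 1)
    for k in range(1, n):                       # upper-triangle diagonals
        line = np.diff(np.r_[0, np.diag(R, k), 0])
        starts, ends = np.flatnonzero(line == 1), np.flatnonzero(line == -1)
        for L in ends - starts:
            hist[L] += 1
    lengths = np.arange(n + 1)
    total = (hist * lengths).sum()
    return (hist[lmin:] * lengths[lmin:]).sum() / total if total else 0.0

# a periodic orbit yields long diagonal lines (DET near 1); noise does not
t = np.linspace(0, 20 * np.pi, 400)
orbit = np.column_stack([np.sin(t), np.cos(t)])
rng = np.random.default_rng(5)
noise = rng.normal(size=(400, 2))
print(determinism(recurrence_matrix(orbit, 0.2)),
      determinism(recurrence_matrix(noise, 0.2)))
```

Quiet and stormy magnetospheric periods would, per the abstract, separate in exactly this kind of measure, with storm-time dynamics showing a distinctly different level of determinism.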
Muon flux intensity modulation (MFIM) recognition is a relevant solar-terrestrial physics problem. The MFIM discussed here are due to geoeffective solar coronal mass ejections.
The necessary observations are carried out using the computerized muon hodoscope (MH) URAGAN developed at NRNU MEPhI, which registers muon flux intensity. The MH counts the number of muons falling on its aperture per unit time. Time series of MH matrix data are formed, in which angular and temporal modulations take place due to MH design features, atmospheric disturbances and noise, whose values significantly exceed the MFIM values.
An MFIM recognition method based on the mathematical apparatus of indicator matrices (IM) and spatial-temporal filtering is proposed.
The time series of MH matrix data are considered as a set of Poisson processes corresponding to the azimuthal and zenithal elements of the MH matrices.
A reference time span is assigned in which MFIM are known to be absent. For this span, matrices of estimates of the mathematical expectations are calculated and, taking into account the Poisson property, matrices of reference confidence intervals are computed. Next, current time sections are formed, on which matrices of the current confidence intervals are calculated. Based on the comparison of the reference and current confidence interval matrices, current anomaly matrices are formed, which are compared with a specified threshold matrix. Threshold exceedances correspond to anomalous events. Binary IM are formed: ones correspond to anomalous events, zeros to the absence of anomalies. Recognition consists in analyzing the IM sequence and identifying areas where non-zero elements are concentrated, which leads to the conclusion that significant MFIM are present. To reduce recognition errors, spatial-temporal IM filtering has been developed.
The MFIM recognition technique, based on IM time series with spatial-temporal filtering, has been tested on model and experimental MH data.
Testing on generated time series of model Poisson MH matrix data with model MFIM confirmed that the proposed method can recognise MFIM down to decreases at the 3-4% level. Application of spatial-temporal filtering made it possible to recognize MFIM with decreases of about half that level.
Testing on experimental MH matrix data time series with model MFIM led to the conclusion that it is possible to recognize MFIM with magnitudes of decreases almost commensurate with those for the model MH data.
The proposed MFIM recognition method based on indicator matrices for MH observation data allows optimization of its parameters and can be successfully applied to MFIM recognition and the early diagnostics of geomagnetic storms.
This work was funded by the Russian Science Foundation (project No.17-17-01215).
How to cite: Sidorov, R., Getmanov, V., Chinkin, V., Gvishiani, A., Dobrovolsky, M., Soloviev, A., Tsibizov, L., Dmitrieva, A., Kovylyaeva, A., Osetrova, N., and Yashin, I.: A method for muon flux intensity modulations recognition using the indicator matrices for the URAGAN hodoscope matrix data, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-303, https://doi.org/10.5194/egusphere-egu21-303, 2021.
Accurate and fast localisation of microseismic events is a requirement for a number of applications, e.g. mining and enhanced geothermal systems. New methods for event localisation have been proposed over the last decades. Waveform-based methods are among the most recently developed; their main advantage is the ability to locate weak seismic events. However, these methods are demanding in terms of computational time, making real-time seismic event localisation very difficult. In this work, we further develop a waveform-based method, the Multichannel coherency migration method (MCM), to improve its computational time. The computational time of the MCM algorithm has been reported to depend linearly on several parameters, such as the number of stations, the length of the waveform time window, the computer architecture, and the volume of the search area for the hypocentre. To minimise the computational time we need to decrease one or more of these parameters without compromising the accuracy of the result. We break the localisation procedure into several steps: (1) we locate the event with a relatively large spatial grid interval, which yields fewer potential hypocentral locations and hence fewer calculations; (2) based on the results of step (1) and the locations of maximum coherency, we decrease the grid volume to a quarter of the original volume and the spatial interval to half the original, focusing only on the area identified in step (1). Step (2) is repeated several times with decreasing grid volumes and spatial intervals until the hypocentral location no longer changes significantly. We tested this approach on both synthetic and real data. We find that, while the accuracy of the hypocentre is not compromised, the computational time is up to 125,000 times shorter.
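The coarse-to-fine refinement loop can be sketched as follows. This is an illustrative re-implementation with a toy coherency function; the box-shrink factor and stopping rule are assumptions, not the exact MCM parameters:

```python
import numpy as np

def refine_location(coherency, bounds, spacing, min_spacing):
    """Coarse-to-fine grid search for the point of maximum coherency.

    coherency   : callable (x, y, z) -> stacked waveform coherency (higher = better)
    bounds      : ((xmin, xmax), (ymin, ymax), (zmin, zmax)) initial search volume
    spacing     : initial spatial grid interval
    min_spacing : stop once the interval drops below this value
    """
    while True:
        # evaluate coherency on the current (coarse) grid
        axes = [np.arange(lo, hi + spacing, spacing) for lo, hi in bounds]
        grid = np.array(np.meshgrid(*axes, indexing="ij")).reshape(3, -1).T
        values = np.array([coherency(*p) for p in grid])
        best = grid[values.argmax()]
        if spacing <= min_spacing:
            return best
        # shrink: halve the interval and search a smaller box around the maximum
        half = spacing * 2  # new half-width of the search box
        bounds = [(b - half, b + half) for b in best]
        spacing /= 2

# synthetic coherency peaked at a "true" hypocentre (3.3, -1.7, 5.1)
coh = lambda x, y, z: -((x - 3.3) ** 2 + (y + 1.7) ** 2 + (z - 5.1) ** 2)
loc = refine_location(coh, [(-10.0, 10.0)] * 3, spacing=2.0, min_spacing=0.05)
```

Each refinement level evaluates only a few hundred grid points instead of the full fine grid, which is where the reported speed-up comes from.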
How to cite: Parastatidis, E., Pytharouli, S., Stankovic, L., Stankovic, V., and Shi, P.: Minimising the computational time of a waveform based location algorithm, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-15664, https://doi.org/10.5194/egusphere-egu21-15664, 2021.
In recent years, installations of wind turbines (WTs) have increased worldwide. Owing to their negative effects on humans, WTs are often installed in areas with low population density. Because of the low anthropogenic noise, these areas are also well suited as sites for seismological stations. As a consequence, WTs are often installed in the same areas as seismological stations. By comparing the noise in recorded data before and after the installation of WTs, seismologists have noticed a substantial worsening of station quality, leading to conflicts between the operators of WTs and earthquake services.
In this study, we compare different techniques to reduce or eliminate the disturbing signal from WTs at seismological stations. For this purpose, we selected a seismological station that shows a significant correlation between the power spectral density and hourly windspeed measurements. Usually, spectral filtering is used to suppress noise in seismic data processing. However, this approach is not effective when noise and signal have overlapping frequency bands, which is the case for WT noise. As a first method, we applied the continuous wavelet transform (CWT) to our data to obtain a time-scale representation. From this representation, we estimated a noise threshold function (Langston & Mousavi, 2019) either from noise before the theoretical P-arrival (pre-noise) or using a noise signal from the past with similar ground velocity conditions at the surrounding WTs. To this end, we installed low-cost seismometers at the surrounding WTs to find similar signals at each WT. From these similar signals, we obtain a noise model at the seismological station, which is used to estimate the threshold function. As a second method, we used a denoising autoencoder (DAE) that learns mapping functions to distinguish between noise and signal (Zhu et al., 2019).
In our tests, the threshold function performs well when the event is visible in the raw or spectrally filtered data, but it fails when WT noise dominates and the event is hidden. In these cases, the DAE removes the WT noise from the data. However, the DAE must be trained with typical noise samples and high signal-to-noise ratio events to distinguish between signal and interfering noise. The threshold function with pre-noise can be applied immediately to real-time data and has a low computational cost. Using a noise model from our prerecorded database at the seismological station does not improve the result, and it is more time consuming to find similar ground velocity conditions at the surrounding WTs.
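As a rough illustration of the threshold-function idea, the sketch below applies the same logic in a short-time Fourier representation rather than the CWT of Langston & Mousavi (2019): a per-frequency threshold is estimated from a noise-only segment (e.g. pre-noise), and time-frequency coefficients of the contaminated trace below it are zeroed. All signal parameters here are synthetic assumptions:

```python
import numpy as np
from scipy.signal import stft, istft

def threshold_denoise(trace, noise, fs, nperseg=256, k=3.0):
    """Suppress stationary noise via a time-frequency threshold function.

    A per-frequency threshold is estimated from a noise-only segment,
    then coefficients of the trace below that threshold are zeroed.
    """
    _, _, N = stft(noise, fs=fs, nperseg=nperseg)
    thresh = k * np.abs(N).mean(axis=1, keepdims=True)  # per-frequency threshold
    f, t, Z = stft(trace, fs=fs, nperseg=nperseg)
    Z[np.abs(Z) < thresh] = 0.0                         # hard thresholding
    _, denoised = istft(Z, fs=fs, nperseg=nperseg)
    return denoised[: len(trace)]

fs = 100
rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 20 * fs)  # stationary "WT" noise
# a short 10 Hz event buried in the noise
x[1000:1100] += 5.0 * np.sin(2 * np.pi * 10.0 * np.arange(100) / fs)
den = threshold_denoise(x, x[:500], fs)  # pre-noise as the noise-only segment
```

Just as described above, such a threshold works when the event stands out of the time-frequency noise floor, and fails once the noise coefficients dominate the signal coefficients.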
How to cite: Heuel, J. and Friederich, W.: Wind Turbine Noise Reduction from Seismological Data, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-4907, https://doi.org/10.5194/egusphere-egu21-4907, 2021.
Long-term Earth observation (EO) time series are an indispensable source for quantifying and analysing past land surface dynamics, as well as for forecasting their future evolution. This study investigates the joint use of geoscientific time series over the last two decades, including EO-based MODIS vegetation indices, DLR Global WaterPack, DLR Global SnowPack, and DLR World Settlement Footprint, as well as further climate and hydrological variables, to quantify and evaluate land surface changes and their potential drivers.
For this purpose, we focus on the Indus-Ganges-Brahmaputra-Meghna (IGBM) river basin in South Asia, the most populated and one of the most diverse river basins worldwide. It is characterized by multiple climate zones, including arid climate in the west, polar climate in the north, and tropical climate in the southeast. Moreover, the northern areas of these river basins are shaped by the Himalayan mountain range, also known as the water tower of Asia, whereas the downstream areas are characterized by fertile soils and intensive agriculture in the Indo-Gangetic Plain, which is dominated by extreme rainfall during the southwest summer monsoon. Here, the availability of water is of paramount importance in social, economic, and political terms, but is threatened by climate change as well as anthropogenic pressure.
To enhance the understanding of land surface processes in the IGBM river basin, we apply state-of-the-art time series analysis techniques, including quantification and evaluation of trends and changepoints. Furthermore, we use partial correlation and a causal discovery approach to explore driving factors of land surface change. Changes and patterns are investigated with respect to the prevailing seasons over the study area. Methods were implemented with a focus on spatial and temporal transferability to enable further large-scale analyses in the future. Initial results covering the last two decades over the IGBM river basin indicate increased greening of vegetation, mostly in areas dominated by croplands. Considering snow cover extent, we observed a decline over the Eastern Himalayas and an increase over the Western Himalayas. Moreover, changes in surface water extent are mixed over the river basin, with negative trends along the Brahmaputra and Ganges rivers and positive trends close to the Bay of Bengal. In addition, preliminary results considering linkages between EO and climate variables reveal strong partial correlation between vegetation and precipitation in western areas, whereas temperature is the dominating climate factor over eastern areas of the IGBM river basin.
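The trend-quantification step can be sketched with a non-parametric Mann-Kendall test and Theil-Sen slope on a synthetic annual vegetation-index series; the series, trend magnitude, and noise level are illustrative assumptions, not the study's data:

```python
import numpy as np
from scipy import stats

def trend_test(series, years):
    """Non-parametric trend estimate for an annual land-surface time series.

    Returns the Theil-Sen slope (units per year) and the Mann-Kendall
    p-value, here obtained from Kendall's tau of the series against time.
    """
    slope, intercept, lo_slope, hi_slope = stats.theilslopes(series, years)
    tau, p = stats.kendalltau(years, series)
    return slope, p

years = np.arange(2001, 2021)
rng = np.random.default_rng(3)
# synthetic vegetation-index series with a weak greening trend plus noise
ndvi = 0.45 + 0.004 * (years - 2001) + rng.normal(0.0, 0.01, years.size)
slope, p = trend_test(ndvi, years)
```

The non-parametric pairing is a common choice for EO series because it is robust to outliers and non-Gaussian noise.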
How to cite: Uereyen, S., Bachofer, F., Huth, J., and Kuenzer, C.: Synergetic analyses of Earth observation time series on land surface dynamics in large river basins, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-12385, https://doi.org/10.5194/egusphere-egu21-12385, 2021.
It is well known that the wide range of spatial and temporal scales present in geophysical flow problems represents a (currently) insurmountable computational bottleneck, which must be circumvented by a coarse-graining procedure. The effect of the unresolved fluid motions enters the coarse-grained equations as an unclosed forcing term, denoted as the 'eddy forcing'. Traditionally, the system is closed by approximate deterministic closure models, i.e. so-called parameterizations. Instead of creating a deterministic parameterization, some recent efforts have focused on creating a stochastic, data-driven surrogate model for the eddy forcing from a (limited) set of reference data, with the goal of accurately capturing the long-term flow statistics. Since the eddy forcing is a dynamically evolving field, a surrogate should be able to mimic the complex spatial patterns displayed by the eddy forcing. Rather than creating such a (fully data-driven) surrogate, we propose to precede the surrogate construction step by a procedure that replaces the eddy forcing with a new source term which: i) is tailor-made to capture spatially-integrated quantities of interest, ii) strikes a balance between physical insight and data-driven modelling, and iii) significantly reduces the amount of training data that is needed. Instead of creating a surrogate model for an evolving field, we now only require a surrogate model for one scalar time series per quantity of interest. We derive the new source terms for a simplified ocean model of two-dimensional turbulence in a doubly periodic square domain, and show that the time-series training data produces the same statistics for our quantities of interest as the full-field eddy forcing.
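As a minimal illustration of a stochastic surrogate for one scalar time series, the sketch below fits a simple AR(1) model to a training series and samples from it; this is an assumed stand-in for the authors' surrogate construction, chosen only to show the idea of reproducing long-term statistics from scalar training data:

```python
import numpy as np

def ar1_surrogate(series, n_steps, rng):
    """Fit an AR(1) model to a scalar training series and sample a surrogate.

    Captures the mean, variance and lag-1 autocorrelation of the
    training data, which is often enough for long-term statistics.
    """
    x = np.asarray(series) - np.mean(series)
    phi = np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])  # lag-1 coefficient
    noise_std = np.std(x[1:] - phi * x[:-1])              # residual noise level
    out = np.empty(n_steps)
    out[0] = x[-1]
    for i in range(1, n_steps):
        out[i] = phi * out[i - 1] + rng.normal(0.0, noise_std)
    return out + np.mean(series)

rng = np.random.default_rng(7)
# synthetic scalar training series with known AR(1) structure
train = np.empty(5000)
train[0] = 0.0
for i in range(1, 5000):
    train[i] = 0.9 * train[i - 1] + rng.normal(0.0, 0.1)
surr = ar1_surrogate(train, 20000, rng)
```

The surrogate can then be run for arbitrarily long horizons at negligible cost compared to resolving the full eddy-forcing field.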
How to cite: Edeling, W. and Crommelin, D.: Reducing full-field training data to time series, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-468, https://doi.org/10.5194/egusphere-egu21-468, 2021.
Global mean surface temperature is a fundamental measure of climate evolution in both past and present, and a key quantity for evaluating climate simulations. However, for paleoclimate periods, its calculation hinges on proxy data distributed sparsely and inhomogeneously in both space and time. Thus, large sets of different proxy records need to be combined to obtain global mean temperature reconstructions, but there is no widely accepted method for this task. Building on the work of Snyder (2016), we suggest and evaluate an algorithm to reconstruct spatially averaged surface temperatures on centennial to orbital timescales. As the most abundant archive for continuous temperature reconstructions, we focus on marine sediment records as input data. Our implementation is applicable to any compilation of sea-surface temperature reconstructions and capable of calculating global, hemispherical and regional temperatures. Major steps of the reconstruction algorithm are interpolation to a common timescale, zonal normalization and calculation of spatially weighted sums, including uncertainty propagation via Monte Carlo methods. We assess the applicability of the algorithm by applying it to the PalMod130k marine palaeoclimate data synthesis (Jonkers et al., 2020) and to pseudo-proxy data generated from transient simulations of the last glacial cycle. Our results suggest that the algorithm calculates average temperatures mostly consistent with expectations; however, its ability to capture centennial-scale variability is limited by the sparse spatio-temporal coverage of the input data. This underlines the importance of increasing the amount, resolution and age control of proxy data, as well as of extending the algorithm to incorporate other types of paleoclimate archives.
 C. W. Snyder, “Evolution of global temperature over the past two million years,” Nature, vol. 538, no. 7624, pp. 226–228, 2016
 L. Jonkers, O. Cartapanis, M. Langner, N. McKay, S. Mulitza, A. Strack, and M. Kucera, “Integrating palaeoclimate time series with rich metadata for uncertainty modelling: Strategy and documentation of the PALMOD 130k marine palaeoclimate data synthesis,” Earth System Science Data, vol. 12, no. 2, pp. 1053–1081, 2020
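The spatial-averaging and Monte Carlo uncertainty-propagation steps can be sketched as follows, with hypothetical site data; the actual algorithm additionally interpolates records to a common timescale and applies zonal normalization:

```python
import numpy as np

def spatial_mean_with_uncertainty(records, lats, errors, n_mc=2000, rng=None):
    """Area-weighted mean of sparse site records with Monte Carlo errors.

    records : (n_sites, n_times) temperature anomalies on a common timescale
    lats    : (n_sites,) site latitudes in degrees
    errors  : (n_sites,) 1-sigma uncertainty of each record
    Returns the mean estimate and its Monte Carlo standard deviation per time.
    """
    rng = rng or np.random.default_rng()
    w = np.cos(np.deg2rad(lats))  # area weighting by latitude
    w = w / w.sum()
    draws = np.empty((n_mc, records.shape[1]))
    for k in range(n_mc):
        # perturb each record by its uncertainty, then average spatially
        perturbed = records + rng.normal(0.0, 1.0, records.shape) * errors[:, None]
        draws[k] = w @ perturbed
    return draws.mean(axis=0), draws.std(axis=0)

# four hypothetical sites, all recording a 1.0 K anomaly with 0.5 K errors
records = np.ones((4, 10))
lats = np.array([0.0, 30.0, -45.0, 60.0])
errors = np.full(4, 0.5)
mean, sd = spatial_mean_with_uncertainty(records, lats, errors,
                                         rng=np.random.default_rng(5))
```

The same weighting restricted to a latitude band yields hemispherical or regional means.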
How to cite: May, M., Weitzel, N., Jonkers, L., and Rehfeld, K.: Evaluating a method for reconstruction of global, zonal and regional mean temperatures from sparse proxy data, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-4361, https://doi.org/10.5194/egusphere-egu21-4361, 2021.
Traditionally, the biogeochemical information preserved in the rock record has been used to study the environmental conditions of Earth's past. There is, however, another important record of Earth's history that is only just beginning to be explored: the genomes of contemporary organisms, i.e. the genetic record. The genetic record is an under-utilized tool for studying Earth history: like the rock record, the genomes of microorganisms have been imprinted with information about our changing planet. In this presentation, we describe a framework for accessing and interpreting the "genetic scars" imprinted on the genomes of microorganisms to identify the timing of the Great Oxidation Event (GOE) independently of the geochemical record. This approach combines ideas from systems biology and data science to infer the timing of major changes in the evolution of microbial lineages and metabolic pathways. Briefly, a horizontal-gene-transfer-constrained molecular clock provides a timeline for major speciation events within the bacterial tree of life, which can be used to date the emergence of specific protein families related to oxygenic photosynthesis and oxygen consumption. A feature selection algorithm for metabolic networks allows us to generalise this technique beyond the GOE and will enable better interpretation of isotope anomalies in the geochemical record.
How to cite: Magnabosco, C.: The Great Oxidation Event can be detected and dated through the genetic record, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-6622, https://doi.org/10.5194/egusphere-egu21-6622, 2021.