EGU2020-7832
https://doi.org/10.5194/egusphere-egu2020-7832
EGU General Assembly 2020
© Author(s) 2020. This work is distributed under
the Creative Commons Attribution 4.0 License.

Optimal processing and unphysical effects in seismic noise correlations

Andreas Fichtner1, Daniel Bowden1, and Laura Ermert2
  • 1ETH Zurich, Institute of Geophysics, Department of Earth Sciences, Zurich, Switzerland (andreas.fichtner@erdw.ethz.ch)
  • 2University of Oxford, Department of Earth Sciences, Oxford, United Kingdom (laura.ermert@earth.ox.ac.uk)

A wide spectrum of processing schemes is commonly applied in the calculation of seismic noise correlations. These schemes are intended to suppress large-amplitude transient and monochromatic signals, to accelerate convergence of the correlation process, or to modify raw correlations into more plausible approximations of inter-station Green's functions. Many of them, such as one-bit normalisation or various other non-linear normalisations, clearly break the linear physics of seismic wave propagation. This naturally raises the question: to what extent are the resulting noise correlations physically meaningful quantities?
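To make the non-linearity concrete, the following minimal numpy sketch (illustrative only, not part of the original abstract) shows that one-bit normalisation violates the superposition principle on which linear wave-propagation physics rests:

    import numpy as np

    rng = np.random.default_rng(0)

    def one_bit(x):
        # One-bit normalisation keeps only the sign of each sample.
        return np.sign(x)

    # Two arbitrary noise traces recorded at the same station.
    u1 = rng.standard_normal(1000)
    u2 = rng.standard_normal(1000)

    # Linearity would require one_bit(u1 + u2) == one_bit(u1) + one_bit(u2),
    # but the sign function does not obey superposition.
    print(np.allclose(one_bit(u1 + u2), one_bit(u1) + one_bit(u2)))  # -> False

Because superposition fails, correlations of the processed traces can no longer be interpreted strictly within linear wave-propagation physics, which is what motivates the question above.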

In this contribution, we rigorously demonstrate that most commonly applied processing methods introduce an unphysical component into noise correlations. This affects noise correlation amplitudes but also, to a lesser extent, time-dependent phase information. The profound consequences are that most processed correlations cannot be entirely explained by any combination of Earth structure and noise sources, and that inversion results may thus be polluted.

The positive component of our analysis is a new class of processing schemes that are optimal in the sense that they (1) completely avoid the unphysical component and (2) closely approximate the desirable effects of conventional processing schemes. These optimal schemes can be derived purely from the observed noise, without any knowledge of, or assumptions about, the nature of the noise sources.

In addition to the theoretical analysis, we present illustrative real-data examples from the Irish National Seismic Network and the Lost Hills array in Central California. This includes a quantification of potential artifacts that arise when mapping unphysical traveltime and amplitude variations into images of seismic velocities or attenuation.

How to cite: Fichtner, A., Bowden, D., and Ermert, L.: Optimal processing and unphysical effects in seismic noise correlations, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7832, https://doi.org/10.5194/egusphere-egu2020-7832, 2020

Comments on the presentation

AC: Author Comment | CC: Community Comment

Presentation version 1 – uploaded on 29 Apr 2020
  • CC1: Comments from the video chat, Laura Ermert, 05 May 2020

    Q: Does optimal processing give the same improvements for one-year-long recordings as well?

    A: Yes, the 'improvements' are by themselves independent of the recording length. The exact effect is hard to predict and, in our experience, depends strongly on the dataset.

    Q: For what kind of dataset would there be almost no difference between optimal and classical processing?

    A: From a theoretical perspective, the biggest differences occur when noise sources change rapidly in time.


    Q: If you do this optimal processing, are you able to achieve faster convergence to a useful signal, or is the same amount of data required?

    A: From what we see, convergence is not faster.


    Q: Hi Andreas, when you say "optimal processing" for ambient noise, how do you judge whether one processing workflow is better than another? Is there a criterion to evaluate the accuracy or correctness of a workflow?

    A: I think no processing workflow is better than another from the outset. Since you do not know the exact Green's function, you have no reference anyway. The 'optimal processing' is not better than any other in the sense that it approximates a Green's function more closely. What it does is remove the unphysical component that the processing you apply may introduce into the correlation.


  • CC2: Comment on EGU2020-7832, Wen Zhou, 05 May 2020

    Hi Andreas, have you tested how strongly this 'factorisation' affects travel-time measurements? I guess the relative error would be larger in the coda part? How does that influence coda-wave interferometry and dv/v measurements?

  • AC1: Comment on EGU2020-7832, Andreas Fichtner, 05 May 2020

    We have not yet looked at this in detail. The only thing we know so far is that the effect is highly dependent on the dataset. In this regard, the method at least allows you to check whether the effect is significant.

  • CC3: Further comments from the video chat, Laura Ermert, 05 May 2020

    Q: Would you test these "improvements" with synthetics?

    A: Apart from the fact that this would be unbelievably expensive, there is actually no need to do this. The whole method is completely data-driven, and we can prove analytically that the optimal schemes in any case remove the unphysical component completely. Sorry that this is all a bit abstract! A manuscript with the whole story is about to be submitted.


    Q: How do you go from a given processing to the closest optimal processing? Do you apply a correction?

    A: No, we do not apply a correction. What we do is the following:

    1. We apply some regular processing to our data and compute correlations.
    2. We compute correlations without any processing at all, though they may not look very useful.
    3. Based on (1) and (2), we analyse the effect of the processing, and compute a factor analysis of it.
    4. From this factor analysis we directly obtain the optimal processing scheme.
    5. We apply the optimal processing scheme to our data. Done!
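    A schematic numpy sketch of this five-step workflow is given below (illustrative only; all function names are placeholders, and the factorise step is a dummy, because the actual factor analysis is described in the authors' manuscript rather than in this abstract):

        import numpy as np

        def one_bit(trace):
            # Example of a conventional, non-linear processing scheme.
            return np.sign(trace)

        def correlate(traces, processing=None):
            # Cross-correlate all station pairs, optionally applying a
            # per-trace processing scheme first.
            if processing is not None:
                traces = [processing(t) for t in traces]
            n = len(traces)
            return np.array([np.correlate(traces[i], traces[j], mode="full")
                             for i in range(n) for j in range(i + 1, n)])

        def factorise(c_processed, c_raw):
            # Placeholder for steps (3)-(4): compare processed and raw
            # correlations and construct the optimal processing scheme.
            # The actual factor analysis is not specified in this abstract,
            # so a dummy identity scheme is returned to keep the outline runnable.
            return lambda trace: trace

        rng = np.random.default_rng(1)
        traces = [rng.standard_normal(2000) for _ in range(3)]        # toy noise records

        c_conventional = correlate(traces, processing=one_bit)        # step (1)
        c_raw = correlate(traces, processing=None)                    # step (2)
        optimal_processing = factorise(c_conventional, c_raw)         # steps (3)-(4)
        c_optimal = correlate(traces, processing=optimal_processing)  # step (5)

    In the actual method, the dummy factorise placeholder would be replaced by the authors' factor analysis of the processing effect.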