EGU General Assembly 2020
© Author(s) 2021. This work is distributed under
the Creative Commons Attribution 4.0 License.

Temporal Multi-Looking of SAR Image Series for Glacier Velocity Determination and Speckle Reduction

Silvan Leinss, Shiyi Li, Philipp Bernhard, and Othmar Frey
  • ETH Zürich, Institut für Umweltingenieurwissenschaften (IFU), BAUG, Zürich, Switzerland

The velocity of glaciers is commonly derived by offset tracking using pairwise cross-correlation or feature matching of either optical or synthetic aperture radar (SAR) images. SAR images, however, are inherently affected by noise-like radar speckle and therefore require much larger image patches for successful tracking than optical data. As a consequence, glacier velocity maps based on SAR offset tracking have a relatively low resolution compared to the nominal resolution of SAR sensors. Moreover, tracking may fail because small features on the glacier surface cannot be detected due to radar speckle. Although radar speckle can be reduced by applying spatial low-pass filters (e.g. a 5x5 boxcar), the spatial smoothing reduces the image resolution roughly by an order of magnitude, which strongly reduces the tracking precision. Furthermore, it blurs out small features on the glacier surface, so tracking can also fail unless clear features like large crevasses are visible.

In order to create high-resolution velocity maps from SAR images and to generate speckle-free radar images of glaciers, we present a new method that derives the glacier surface velocity field by correlating temporally averaged sub-stacks of a series of SAR images. The key feature of the method is to warp every pixel in each SAR image according to its temporally increasing offset with respect to a reference date. The offset is determined by the glacier velocity, which is obtained by maximizing the cross-correlation between the averages of two sub-stacks. Currently, we need to assume that the surface velocity is constant during the acquisition period of the image series, but this assumption can be relaxed to a certain extent.
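The warping step described above can be sketched in a few lines. The following is a minimal numpy/scipy illustration under the stated constant-velocity assumption; the function and variable names are ours, not the authors' code:

```python
import numpy as np
from scipy.ndimage import shift

def warp_and_average(stack, times, t_ref, vx, vy):
    """Shift each image so that features moving at (vx, vy) [px/day]
    are aligned to their position at the reference date, then average.
    Constant surface velocity over the series is assumed."""
    warped = []
    for img, t in zip(stack, times):
        dt = t - t_ref                                  # days since reference
        # move the image back along the flow by v * dt
        warped.append(shift(img, (-vy * dt, -vx * dt), order=1, mode='nearest'))
    return np.mean(warped, axis=0)

# Synthetic demo: a bright feature drifting at 2 px/day in x, 1 px/day in y.
y, x = np.mgrid[0:64, 0:64]
times = np.arange(5, dtype=float)                       # acquisition days
stack = np.array([np.exp(-((y - 20 - 1.0*t)**2 + (x - 20 - 2.0*t)**2) / 8.0)
                  for t in times])
avg = warp_and_average(stack, times, t_ref=0.0, vx=2.0, vy=1.0)
peak = np.unravel_index(np.argmax(avg), avg.shape)      # feature at day-0 position
```

In the actual method, the per-pixel (vx, vy) is not known in advance but is found by maximizing the cross-correlation between two such sub-stack averages.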

As the method combines the information of multiple images, radar speckle is strongly suppressed by temporal multi-looking, and therefore the signal-to-noise ratio of the cross-correlation is significantly improved. We found that the method outperforms the pair-wise cross-correlation method for velocity estimation in terms of both the coverage and the resolution of the velocity field. At the same time, very high resolution radar images are obtained that reveal features otherwise hidden in radar speckle.
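The speckle-suppression gain from temporal multi-looking follows standard speckle statistics: fully developed single-look intensity speckle has a coefficient of variation (std/mean) of 1, and averaging N independent looks reduces it to 1/sqrt(N). A small simulation of this (ours, assuming a static scene with fully developed speckle):

```python
import numpy as np

rng = np.random.default_rng(42)
sigma0 = 1.0                 # mean backscatter intensity of a static scene
n_looks, n_pix = 16, 100_000

# Fully developed single-look intensity speckle is exponentially
# distributed, so its coefficient of variation (std/mean) is 1.
single = rng.exponential(sigma0, size=n_pix)

# Temporal multi-looking: average n_looks independent speckle
# realizations of the same scene.
multi = rng.exponential(sigma0, size=(n_looks, n_pix)).mean(axis=0)

cv_single = single.std() / single.mean()   # close to 1
cv_multi = multi.std() / multi.mean()      # close to 1/sqrt(16) = 0.25
```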

As the reference date, to which the sub-stacks are averaged, can be arbitrarily chosen, a smooth flow animation of the glacier surface can be generated from a limited number of SAR images. The presented method could form the basis for a new generation of tracking methods, as it is excellently suited to exploit the large number of emerging, freely and globally available high-resolution SAR image time series.

How to cite: Leinss, S., Li, S., Bernhard, P., and Frey, O.: Temporal Multi-Looking of SAR Image Series for Glacier Velocity Determination and Speckle Reduction, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3643, 2020.

Display materials


Comments on the display material

AC: Author Comment | CC: Community Comment

Display material version 2 – uploaded on 06 May 2020
General improvements of the presentation.
  • AC1: Welcome, Silvan Leinss, 06 May 2020

    Dear all, I'm presenting a method motivated by image stabilization in video processing to generate high resolution velocity maps of glaciers from SAR images. Compared to pair-wise cross-correlation offset tracking, we can reduce the template size from 100x100 px to about 30x30 px, which improves the spatial resolution of velocity maps. At the same time, we can generate almost speckle-free SAR image animations at arbitrarily high frame rates from a limited set of SAR images. Check out the video [link: ], but don't miss the presentation.

  • CC1: Comment on EGU2020-3643, Daniel Farinotti, 07 May 2020

    Hi Silvan, just watched the clip you posted in the presentation. The SAR sequence with motion compensation is ABSOLUTELY STUNNING!!! For how many glaciers would you be able to produce such a result? I.e. is your approach easily scalable? Well, you know where to reach me for discussing this further! :-) Great work! Daniel

    • AC2: Reply to CC1, Silvan Leinss, 07 May 2020

      Dear Daniel, 

      what I need is a sequence of about 10-15 SAR images within a time period for which the glacier surface does not change significantly, i.e. mostly one winter season (5 months / revisit time (TDM: 11 days, S1: 12 days) = 12-14 images). For TSX/TanDEM-X (~2 m resolution) such data series are available for multiple test sites worldwide. For Sentinel-1 (~10 m resolution), such data series are available globally! Required processing: interferometric coregistration + my method. (P.S.: credit for editing my time-lapse movies goes to Shiyi Li, co-author.)

      • AC3: Reply to AC2, Silvan Leinss, 07 May 2020

        The animation I presented is based on ~120 images acquired within 8 years. However, the method allows extreme temporal extrapolation by integrating the velocity field. I generated a movie based on 11 images of a single year and extrapolated the animation to ±3 years(!) beyond the acquisition period:

  • AC4: Answer to Live Chat Comment of Laurane Charrier USMB/ONERA, Silvan Leinss, 07 May 2020

    Laurane Charrier: "1) To what extent do you think that your approach can be applied to a larger number of image pairs? 2) To what extent is it time consuming in comparison with pair-wise cross-correlation?"

    The method is very flexible in terms of averaging: both the input SAR images can be averaged (as I did), and the cross-correlation surfaces of multiple sub-stack pairs can be averaged (e.g. average each winter season's cross-correlation field from the stack pair). Pushing this idea further means asking: why do we need sub-stack averaging at all if we can directly average arbitrarily large stacks of cross-correlation functions? (See the contribution of Shiyi Li this morning (D2575 ""); he is following this idea in his PhD thesis.) We develop both aspects together and have not found significant differences in computational time.
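The second averaging variant mentioned above, averaging the correlation surfaces of several sub-stack pairs before picking the peak, can be sketched as follows. This is a simplified illustration with our own helper names, not the authors' implementation:

```python
import numpy as np
from scipy.signal import fftconvolve

def correlation_surface(ref, tmpl):
    """Cross-correlate a zero-mean template with a reference patch
    (convolution with the flipped template, computed via FFT)."""
    t = tmpl - tmpl.mean()
    return fftconvolve(ref, t[::-1, ::-1], mode='same')

def averaged_offset(pairs):
    """Average the correlation surfaces of several (ref, tmpl)
    sub-stack pairs, then locate the common peak -> (dy, dx)."""
    acc = sum(correlation_surface(r, t) for r, t in pairs)
    peak = np.unravel_index(np.argmax(acc), acc.shape)
    cy, cx = acc.shape[0] // 2, acc.shape[1] // 2
    return peak[0] - cy, peak[1] - cx

# Synthetic check: the same feature, shifted by (3, 5) px and noisy,
# observed in several independent pairs.
rng = np.random.default_rng(1)
N = 33
y, x = np.mgrid[0:N, 0:N]
blob = lambda cy, cx: np.exp(-((y - cy)**2 + (x - cx)**2) / 4.0)
tmpl = blob(16, 16)
pairs = [(blob(19, 21) + rng.normal(0, 0.1, (N, N)), tmpl)
         for _ in range(6)]
offset = averaged_offset(pairs)
```

Averaging the surfaces rather than the images keeps the contribution of each pair even when individual pairs are too noisy to yield a reliable peak on their own.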


  • CC2: Comment on EGU2020-3643, Matt Trevers, 07 May 2020

    Hi Silvan,

    Great presentation (easily understandable for a non-remote sensor) and I really liked the movies.

    I'm presuming that you sample a range of vtest from 0 -> vmax, in both x and y, and carry out the stacking and cross-correlation procedure for every combination of vtest,x and vtest,y in order to optimise vx, vy for each pixel. Is this correct? How do you choose the increments in your range of vtest?

    Following on from Daniel's comment, what do you think would be the challenges and limitations in applying this method to fast flowing tidewater glaciers, where not only maximum velocities but also spatial gradients in velocity (e.g. across shear margins) are much larger?

    Many thanks

    • AC5: Reply to CC2, Silvan Leinss, 11 May 2020

      Dear Matt, 

      please excuse the late reply. Yes, the sampling is done exactly the way you described. As we are aiming for sub-pixel accuracy over the entire stack (i.e. a shift of a sub-pixel between the very first and the very last image), the number of sampling increments is given by vmax * DeltaTmax / (resolution * subpixel). Assuming a maximum expected velocity of 1 m/day, a maximum time span of the stack of DeltaTmax = 100 days, a resolution of 5 m/px and a sub-pixel accuracy of 0.5 pixels over the entire stack results in 1*100/(5*0.5) = 40 samples between 0 and vmax. As we actually sample between -vmax and +vmax in two dimensions, we need (40*2)^2 = 6400 operations of sub-stack averaging and cross-correlation using a correlation window of 30 px.

      As this is a pretty intense computational load, we apply a pyramid approach and downsample the original image for each level by a factor of 2 (we use 4 levels, including full resolution). Thereby we estimate the velocity initially at 8-times lower resolution and therefore need to sample far fewer velocity values: 8-times lower resolution makes 1*100/(5*8*0.5) = 5 samples between 0 and vmax, or (5*2)^2 = 100 in total. Then we use the estimated velocity as a starting point for a refined search within ±vmax/2 around the estimate at double resolution (again 100 samples in total at level 3). The same applies for level 2. Up to here we have used only 300 (instead of 6400) computations. At the final, full resolution we apply a local gradient search.

      When upsampling the velocity field to a lower pyramid level we apply some smoothing (30 px Gauss) to the velocity field to obtain smooth fields. After the local gradient search, no smoothing is applied, which explains the "noise" in our velocity field.
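The sample-count arithmetic in this reply can be reproduced directly; the variable names below are ours, and note that (40*2)^2 comes out to 6400:

```python
# Trial-velocity counts per pyramid level, following the numbers in the
# reply above.  subpix is the targeted sub-pixel shift accumulated over
# the whole stack.
vmax, dT_max, res, subpix = 1.0, 100.0, 5.0, 0.5   # m/day, days, m/px, px

def n_trials(level):
    """2-D trial count at a pyramid level that is 2**level coarser
    than full resolution, sampling -vmax .. +vmax in both x and y."""
    n_1d = vmax * dT_max / (res * 2**level * subpix)
    return int(n_1d * 2) ** 2

full_search = n_trials(0)     # exhaustive search at full resolution
coarse_search = n_trials(3)   # 8-times coarser starting level
```

The pyramid thus replaces a single exhaustive search with a coarse search plus a few refinements of roughly 100 trials each.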

      Applying this method to a fast-flowing tidewater glacier with strong velocity gradients (e.g. at its margins) requires a good enough estimate of vmax, which increases the number of operations. If the glacier is wide enough that the center velocity can be estimated at the lowest resolution level (wide, i.e. > resolution * 2^3 * corr-window = 5*8*30 = 1200 meters), the refinement will improve the estimation of the strong gradient at each level and should finally track the velocity at 30 px * 5 m/px = 150 meter resolution.

      The animation I presented contains a velocity gradient of 1m/day over 400 meter at the ice fall. It seems that the velocity is correctly estimated there. The image resolution is 2 meter. If you are interested in a specific glacier (or like to suggest a test glacier for benchmark), let me know.


Display material version 1 – uploaded on 06 May 2020, no comments