- Simon Fraser University, Engineering Sciences, Burnaby, B.C., Canada (fha20@sfu.ca)
Increased availability of high-resolution synthetic aperture radar (SAR) data has made SAR speckle tracking an important tool for measuring surface deformations that are too large to be captured reliably with interferometric SAR (InSAR). Speckle tracking finds local two-dimensional offsets for a pair of images by maximizing the normalized cross-correlation between shifted chips from one image and stationary reference chips from the other. This approach has several drawbacks: it is computationally intensive, because the optimum matching problem is solved independently for each reference chip and each image pair; it produces displacement maps at a resolution coarser than the input imagery; and it often requires significant post-processing to remove frequent mismatches and noise artifacts.
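The chip-matching step described above can be sketched as follows. This is a minimal illustration of normalized cross-correlation (NCC) speckle tracking, not the authors' implementation; the chip size, search range, and function names are illustrative assumptions.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized chips."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_chip(ref_img, sec_img, row, col, chip=16, search=4):
    """Find the (drow, dcol) offset maximizing NCC for one reference chip."""
    ref = ref_img[row:row + chip, col:col + chip]
    best, best_off = -np.inf, (0, 0)
    # Exhaustive search over all candidate shifts of the secondary chip.
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            sec = sec_img[row + dr:row + dr + chip, col + dc:col + dc + chip]
            score = ncc(ref, sec)
            if score > best:
                best, best_off = score, (dr, dc)
    return best_off

# Toy demo: a synthetic speckle pattern shifted by (2, 3) pixels.
rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))
shifted = np.roll(img, shift=(2, 3), axis=(0, 1))
print(match_chip(img, shifted, 24, 24))  # → (2, 3)
```

The nested loop over candidate shifts, repeated for every reference chip and every image pair, is the source of the computational cost noted above.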
In this study, we propose an alternative to traditional speckle tracking that uses unsupervised machine learning (ML) for the non-rigid co-registration of a pair of approximately (globally) pre-registered SAR images, deriving the two-dimensional displacement fields typical of faster-moving composite landslides. Because the images are globally co-registered beforehand, the output local offset field directly captures local deformation. Trained on an ensemble of sufficiently large, sensor-specific datasets from representative displacement test sites, the ML network can then ingest any SAR image pair acquired by the same sensor, whether previously seen or unknown, and produce a local vector offset field that accurately aligns the images.
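A minimal sketch of how such a dense local offset field, once predicted, resamples the secondary image onto the reference grid; the constant offset field and the `warp` helper are illustrative stand-ins, not the authors' network or code.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(secondary, offsets):
    """Resample `secondary` at (row + drow, col + dcol) for every pixel.

    offsets: array of shape (2, H, W) holding per-pixel (drow, dcol).
    """
    h, w = secondary.shape
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([rows + offsets[0], cols + offsets[1]])
    # Bilinear resampling; edge pixels repeat their nearest valid value.
    return map_coordinates(secondary, coords, order=1, mode="nearest")

# Toy check: a secondary image shifted by (2, 3) pixels is re-aligned to
# the reference by a constant offset field of (2, 3).
rng = np.random.default_rng(1)
ref = rng.standard_normal((32, 32))
sec = np.roll(ref, shift=(2, 3), axis=(0, 1))
off = np.stack([np.full((32, 32), 2.0), np.full((32, 32), 3.0)])
aligned = warp(sec, off)
print(np.allclose(aligned[:29, :28], ref[:29, :28]))  # → True
```

In the proposed approach the offset field varies per pixel and is produced by a single forward pass of the trained network, which is why the output retains near-full image resolution rather than the chip-level resolution of conventional tracking.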
The resulting deformation field represents local movements between the two images analogous to the two-dimensional offset maps produced by conventional speckle tracking. Compared to traditional speckle-tracking workflows, the proposed approach is computationally more efficient (e.g., approximately twice as fast when applied to a stack of seven images), as the trained network evaluates a learned parametric function to directly map one (globally pre-registered) SAR image to another, rather than relying on repeated chip-to-chip optimization. Our proposed method also yields substantially cleaner displacement estimates, with reduced noise and approximately 84% fewer outliers. Finally, the resulting two-dimensional offset (deformation) maps nearly preserve the original spatial resolution of the input SAR images.
How to cite: Hosseini, F. and Rabus, B.: An unsupervised machine learning approach to derive two-dimensional displacement from repeat-pass SAR images, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-15433, https://doi.org/10.5194/egusphere-egu26-15433, 2026.