EGU2020-19847
https://doi.org/10.5194/egusphere-egu2020-19847
EGU General Assembly 2020
© Author(s) 2021. This work is distributed under
the Creative Commons Attribution 4.0 License.

Close-range sensing and object-based analysis of shallow landslides and erosion in grasslands

Andreas Mayr1, Martin Rutzinger1,2, Magnus Bremer1,2, and Clemens Geitner1
  • 1University of Innsbruck, Institute of Geography, Innsbruck, Austria
  • 2Austrian Academy of Sciences, Institute for Interdisciplinary Mountain Research, Innsbruck, Austria

Close-range sensing methods for topographic data acquisition, such as Structure-from-Motion with multi-view stereo (SfM-MVS) photogrammetry and laser scanning from the ground or from unmanned aerial systems (UAS), have strongly improved over the last decade. Since they provide data with sub-decimetre resolution and accuracy, these methods open new possibilities for bridging the gap between local in-situ observations and area-wide space-borne or aerial remote sensing. For assessments of shallow landslides and erosion patches, which are widespread phenomena in mountain grasslands, the potential of close-range sensing is twofold: First, it can provide accurate reference data for assessing the geometric accuracy of catchment- or regional-scale monitoring of eroded areas based on aerial or satellite remote sensing systems. Second, selected sites can be monitored at a very detailed local scale to reveal processes of secondary erosion or natural vegetation succession and slope stabilisation. Furthermore, high-resolution 4D data from multi-temporal close-range sensing make it possible to quantify volumes and rates of displacement at erosion features. In this contribution, we propose to exploit this potential of close-range sensing for landslide and erosion studies with object-based approaches for raster and 3D point cloud analyses. Assuming that erosion features can be discriminated from undisturbed grassland and from trees and shrubs based on their morphometric and spectral signatures, we show how computer vision and machine learning techniques help to detect and label these features automatically as spatial objects in the data. We combine this object detection and labelling with 2.5D differential elevation models and with 3D deformation analysis of point clouds. This strategy addresses one of the key challenges of automatically analysing close-range sensing data in geomorphological studies, i.e. linking geometric information (such as the size and shape of erosion features or the surface change across a time series) with semantic information (e.g. separating vegetation from complex ground structures). In three case studies from recent projects in the Alps, where we acquired data by UAS, terrestrial laser scanning and terrestrial photogrammetry, we demonstrate the use of these new methodological developments. The methods tested can reliably detect changes with minimum magnitudes of centimetre to decimetre level, depending primarily on the specific data acquisition setup. By automatically relating these changes to erosion features of different scales (i.e. both entire eroded areas and their components, e.g. collapsing parts of the scarp), such analyses can provide valuable insights into process dynamics. In our tests, close-range sensing and automated data analysis workflows helped to understand both the development of new eroded areas and their enlargement by secondary erosion processes or episodic landslide reactivation. Based on the experience from these case studies, we also discuss the main challenges and limitations of these methods for erosion monitoring applications.

How to cite: Mayr, A., Rutzinger, M., Bremer, M., and Geitner, C.: Close-range sensing and object-based analysis of shallow landslides and erosion in grasslands, EGU General Assembly 2020, EGU2020-19847, https://doi.org/10.5194/egusphere-egu2020-19847


Comments on the display material

AC: Author Comment | CC: Community Comment

Display material version 1 – uploaded on 08 May 2020
  • CC1: Comment on EGU2020-19847, Romina Vanessa Barbosa, 08 May 2020

    Hello,

    I am interested in how you have defined the segments to classify the grass, etc. I see that you use diverse features at diverse scales, so how did you select the final features? Did you mix diverse scales?

    thanks,

    Romina 

    • AC1: Reply to CC1, Andreas Mayr, 08 May 2020

      Hi Romina,
      Thanks for your interest in this critical point. I assume that you mean the point cloud segmentation (not the image segmentation in the first case). Sorry, this step is quite complicated, and I am not really content with it ... Maybe Fig. 2 in Mayr et al. (2017) helps with understanding the concept.

      At this step, we used only one scale (a neighbourhood with a radius of 0.2 m) and assumed that three of the point cloud features are particularly relevant for separating objects: the 3D/2D density ratio, omnivariance and geometric curvature. An unsupervised pre-classification in this morphometric feature space is performed by k-means clustering (with k = 10). The point cloud epochs are merged into one point cloud before clustering, to optimise the overall fit of this pre-classification for the entire time series. Subsequently, this point cloud is split into the epochs again, and the assigned feature-space clusters are used to constrain the segmentation of each point cloud (separately) in the spatial domain. This aims at creating segments with unique object associations that are semantically consistent across all point cloud epochs.
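      The merge-cluster-split step could be sketched roughly as follows (a minimal illustration using scikit-learn's KMeans; the function name and the toy feature values are mine, and the computation of the actual per-point morphometric features is not shown):

```python
import numpy as np
from sklearn.cluster import KMeans

def preclassify_epochs(feature_arrays, k=10, seed=0):
    """Unsupervised pre-classification in morphometric feature space.

    feature_arrays: one (n_i, 3) array per epoch, holding the three
    per-point features named in the reply (3D/2D density ratio,
    omnivariance, geometric curvature). Returns per-epoch label arrays.
    """
    # Merge all epochs so a single clustering fits the whole time series.
    merged = np.vstack(feature_arrays)
    labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(merged)
    # Split the labels back into the original epochs.
    splits = np.cumsum([len(a) for a in feature_arrays])[:-1]
    return np.split(labels, splits)

# Toy usage: two epochs with random stand-in feature values
rng = np.random.default_rng(42)
epochs = [rng.random((100, 3)), rng.random((120, 3))]
per_epoch_labels = preclassify_epochs(epochs)
print([len(l) for l in per_epoch_labels])  # [100, 120]
```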

      To keep the segments small and compact, the x and y coordinates additionally constrain the region growing, allowing a maximum tolerance of ±0.6 m from the seed point. Importantly, this criterion enforces an initial over-segmentation and prevents excessive generalisation of point cloud features.
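      A minimal sketch of such a cluster- and seed-constrained region growing (the greedy flood-fill strategy and all names are my illustration under the stated constraints, not the authors' implementation):

```python
import numpy as np
from scipy.spatial import cKDTree

def grow_segments(xyz, cluster_labels, growth_radius=0.2, seed_tol=0.6):
    """Greedy region growing: a segment only accepts neighbours that share
    the seed's feature-space cluster AND lie within +-seed_tol metres of
    the seed in x and y (keeps segments small and compact).
    Parameter names are illustrative."""
    tree = cKDTree(xyz[:, :2])
    segment = np.full(len(xyz), -1, dtype=int)  # -1 = not yet assigned
    next_id = 0
    for seed in range(len(xyz)):
        if segment[seed] != -1:
            continue
        segment[seed] = next_id
        frontier = [seed]
        while frontier:
            p = frontier.pop()
            for q in tree.query_ball_point(xyz[p, :2], growth_radius):
                if segment[q] != -1:
                    continue
                if cluster_labels[q] != cluster_labels[seed]:
                    continue  # cluster constraint from the pre-classification
                if np.any(np.abs(xyz[q, :2] - xyz[seed, :2]) > seed_tol):
                    continue  # spatial constraint: stay near the seed point
                segment[q] = next_id
                frontier.append(q)
        next_id += 1
    return segment

# Toy usage: two point pairs far apart end up in two segments
xyz = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0],
                [2.0, 0.0, 0.0], [2.1, 0.0, 0.0]])
seg = grow_segments(xyz, np.zeros(4, dtype=int))
print(seg)  # e.g. [0 0 1 1]
```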

      Moreover, the cloud-to-cloud distances (distanceC2C) of each point cloud to its predecessor and to its successor point cloud in the time series are calculated. These distances are used to discriminate areas of change (distanceC2C > 0.15 m) and stable areas (distanceC2C < 0.15 m). This criterion prevents the region growing from including both areas of change and stable areas in the same segment.
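      The change/stable discrimination could be approximated with a nearest-neighbour cloud-to-cloud distance (a simplification of what dedicated tools compute; the function and variable names are hypothetical):

```python
import numpy as np
from scipy.spatial import cKDTree

def c2c_change_mask(cloud, reference, threshold=0.15):
    """Flag points of `cloud` whose nearest-neighbour distance to
    `reference` exceeds `threshold` (metres) as areas of change;
    the rest are treated as stable."""
    dist, _ = cKDTree(reference).query(cloud)
    return dist > threshold

# Toy usage: one point barely moved, one moved by 0.5 m
ref = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
cur = np.array([[0.0, 0.0, 0.05], [1.0, 0.0, 0.5]])
mask = c2c_change_mask(cur, ref)
print(mask)  # [False  True]
```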

      Any alternative ideas for a more straightforward point cloud segmentation that might work in natural environments (almost no planar objects; objects not as clearly delimited as in built environments) are very welcome!
      Best regards,
      Andreas

      • CC2: Reply to AC1, Romina Vanessa Barbosa, 08 May 2020

        Thank you for your reply!

        Romina