EGU24-10034, updated on 08 Mar 2024
https://doi.org/10.5194/egusphere-egu24-10034
EGU General Assembly 2024
© Author(s) 2024. This work is distributed under
the Creative Commons Attribution 4.0 License.

Operationalize large-scale point cloud classification: potentials and challenges

Onur Can Bayrak1,2, Ma Zhenyu2, Elisa Mariarosaria Farella2, and Fabio Remondino2
  • 1Department of Geomatics Engineering, Faculty of Civil Engineering, Yildiz Technical University, Istanbul, Turkey (onurcb@yildiz.edu.tr)
  • 2Bruno Kessler Foundation, 3D Optical Metrology (3DOM), Trento, Italy (obayrak@fbk.eu, zma@fbk.eu, elifarella@fbk.eu, remondino@fbk.eu)

Urban and natural landscapes are characterized by diverse built and vegetated elements with unique features, and their proper identification is crucial for many applications, from urban planning to forestry inventory and natural resource management. With the rapid evolution and deployment of high-resolution airborne and Unmanned Aerial Vehicle (UAV) technologies, large areas can be easily surveyed to create high-density point clouds. Photogrammetric cameras and LiDAR sensors can offer 3D data of unprecedented quality (a few cm on the ground), allowing even small objects to be discriminated and mapped. However, the semantic enrichment of these 3D data is still far from being a fully reliable, accurate, unsupervised, explainable and generalizable process deployable at large scale, on data acquired with any sensor, and at any spatial resolution.

This work reports the state-of-the-art and recent developments in urban and natural point cloud classification, with a particular focus on:

  • Standardization in defining the semantic classes through a multi-resolution and multi-scale approach: a multi-level class hierarchy is introduced to improve and optimize the learning process and to accommodate a large number of classes (a minimal sketch follows this list).
  • Instance segmentation in very dense areas: closely spaced and overlapping individual objects require precise segmentation to be accurately identified and classified. We are developing a hierarchical segmentation method specifically designed for urban furniture classes with few samples, to improve the completeness of classification in dense urban areas (a generic clustering baseline is sketched after this list).
  • Generalization of the procedures and transferability of developed models from a fully labelled domain to an unseen scenario.
  • Handling of under-represented objects (e.g., pole-like objects, pedestrians, and other urban furniture): classifying under-represented objects presents a unique set of challenges due to their sparse occurrence and similar geometric characteristics. We introduce a new method that specifically targets the effective identification and extraction of these objects by combining knowledge-based methods and deep learning (a standard class-weighting baseline is sketched after this list).
  • Available datasets and benchmarks to evaluate and compare learning-based methods and algorithms for 3D semantic segmentation: urban-level aerial 3D point cloud datasets can be classified according to the presence of color information, the number of classes, or the type of sensor used for data gathering. The ISPRS Vaihingen, DublinCity, DALES, LASDU and CENAGIS-ALS datasets, although extensive in size, do not provide color information. Conversely, Campus3D, Swiss3DCities, and Hessigheim3D include color data but feature limited coverage and few class labels. SensatUrban, STPLS3D, and HRHD-HK were collected across extensive urban regions, but they also present a reduced number of classes. YTU3D surpasses the other datasets in class diversity but covers less extensive areas than SensatUrban, STPLS3D, and HRHD-HK. Despite these differences, the common deficiencies across all datasets are the presence of under-represented classes, the limited generalization of trained models, and the low accuracy on unbalanced categories, which make these models difficult to use in real-life scenarios.
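To make the multi-level class concept concrete, the following minimal Python sketch shows how fine-grained labels could be collapsed into coarser parent classes so that a model can be trained or evaluated at different hierarchy levels. The class names and the two-level structure are illustrative assumptions, not the taxonomy used in this work.

    # Illustrative two-level class hierarchy for per-point labels.
    # The class names below are assumptions for demonstration only.
    HIERARCHY = {
        "ground":     ["road", "sidewalk", "terrain"],
        "vegetation": ["tree", "shrub", "grass"],
        "building":   ["roof", "facade"],
        "furniture":  ["pole", "traffic_sign", "bench"],
    }

    # Fine-to-coarse lookup, so per-point fine labels can be collapsed
    # to the coarse level for training or evaluation.
    FINE_TO_COARSE = {fine: coarse
                      for coarse, fines in HIERARCHY.items()
                      for fine in fines}

    def to_coarse(fine_labels):
        """Map per-point fine labels to their coarse parent classes."""
        return [FINE_TO_COARSE[label] for label in fine_labels]

    print(to_coarse(["road", "tree", "pole", "facade"]))
    # -> ['ground', 'vegetation', 'furniture', 'building']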
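For the instance segmentation point, a generic baseline is sketched below: the points of a single semantic class are split into individual objects by Euclidean clustering with DBSCAN (scikit-learn). This is a common baseline shown only for illustration, not the hierarchical method under development; the eps and min_points values are assumptions.

    # Generic instance-segmentation baseline: Euclidean clustering of the
    # points of one semantic class (e.g. poles) with DBSCAN.
    import numpy as np
    from sklearn.cluster import DBSCAN

    def cluster_instances(xyz, eps=0.5, min_points=20):
        """Return an instance id per point; -1 marks noise/unassigned points.

        xyz        : (N, 3) array of coordinates of one semantic class
        eps        : neighbourhood radius in metres (assumed metric CRS)
        min_points : minimum cluster size
        """
        return DBSCAN(eps=eps, min_samples=min_points).fit_predict(xyz)

    # Two synthetic "poles" 5 m apart, 100 points each
    rng = np.random.default_rng(0)
    pole_a = rng.normal([0, 0, 2], [0.05, 0.05, 1.0], size=(100, 3))
    pole_b = rng.normal([5, 0, 2], [0.05, 0.05, 1.0], size=(100, 3))
    print(set(cluster_instances(np.vstack([pole_a, pole_b]))))
    # -> {0, 1} (plus -1 for any noise points)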
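Finally, for under-represented classes, one standard counter-measure is to re-weight the loss by inverse class frequency. The sketch below shows only this baseline; it is not the combined knowledge-based and deep-learning method referred to above.

    # Inverse-frequency class weights for a weighted loss, a standard
    # baseline against class imbalance (rare classes get larger weights).
    import numpy as np

    def inverse_frequency_weights(labels, num_classes, eps=1e-6):
        """labels: 1-D integer array of per-point class ids."""
        counts = np.bincount(labels, minlength=num_classes).astype(float)
        weights = 1.0 / (counts + eps)
        return weights / weights.sum() * num_classes  # mean weight ~ 1

    labels = np.array([0] * 9000 + [1] * 900 + [2] * 100)  # heavily imbalanced
    print(inverse_frequency_weights(labels, num_classes=3))
    # rare class 2 receives the largest weight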

The presentation will highlight the importance of semantic enrichment processes in the geospatial and mapping domain and their role in providing more understandable data to end-users and policy-makers. Available learning-based methods, open issues in point cloud classification and recent progress will be discussed for both urban and forestry scenarios.

How to cite: Bayrak, O. C., Zhenyu, M., Farella, E. M., and Remondino, F.: Operationalize large-scale point cloud classification: potentials and challenges, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-10034, https://doi.org/10.5194/egusphere-egu24-10034, 2024.