- ¹School of Geography, Planning, and Spatial Sciences, University of Tasmania, Hobart, Australia (laura.sotomayor@utas.edu.au)
- ²School of Biological Sciences, University of Adelaide, Adelaide, Australia (megan.lewis@adelaide.edu.au)
- ³Sensor-based Geoinformatics (geosense), University of Freiburg, Freiburg, Germany (teja.kattenborn@geosense.uni-freiburg.de)
- ⁴CSIRO Environment, Locked Bag 2, Glen Osmond, SA 5064, Australia (shaun.levick@csiro.au)
This study leverages deep learning to improve the identification of trees, shrubs, grasses, and other vegetation as distinct plant growth forms, which are essential for deriving vegetation structural properties. Fractional Vegetation Cover (FVC), comprising green vegetation (GV), non-photosynthetic vegetation (NPV), and bare earth (BE), is particularly challenging to discriminate and map in centimetre-scale Unoccupied Aerial Systems (UAS) imagery because of spectral similarities and environmental variability. Structural properties derived from 3D data help to separate the FVC components, for example by distinguishing NPV on the ground (litter and coarse woody debris) from standing wood, or by separating grasses, shrubs, and tree canopies from one another.
To address these challenges, this study combines LiDAR voxel data, transformed into 2D raster representations, with multispectral imagery. Each raster channel encodes one aggregated voxel attribute, such as mean height, maximum height, point density, or intensity. These inputs form the basis of a 2D U-Net deep learning model trained on reference datasets from Calperum Station, a semi-arid ecosystem in South Australia. By incorporating both canopy and ground elements, including NPV such as litter and coarse woody debris, this approach aims to enhance the model's capacity for accurate FVC classification.
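To make the voxel-to-raster step concrete, the following Python sketch illustrates how a LiDAR point cloud could be aggregated into the four raster channels described above. It is a minimal sketch under stated assumptions, not the study's actual implementation: the array layout, cell size, and function name are illustrative.

```python
import numpy as np

def rasterize_point_cloud(points, cell_size=0.1):
    """Project a LiDAR point cloud onto a 2D grid, one aggregated
    attribute per channel (illustrative sketch; layout and attribute
    choices are assumptions, not the study's implementation).

    points : (N, 4) array of x, y, z, intensity
    returns: (4, H, W) raster of mean height, max height,
             point density, and mean intensity
    """
    xy_min = points[:, :2].min(axis=0)
    cols = ((points[:, 0] - xy_min[0]) / cell_size).astype(int)
    rows = ((points[:, 1] - xy_min[1]) / cell_size).astype(int)
    h, w = rows.max() + 1, cols.max() + 1
    flat = rows * w + cols  # linear cell index for each point

    # Per-cell sums via bincount, then safe division for the means.
    density = np.bincount(flat, minlength=h * w)
    sum_z = np.bincount(flat, weights=points[:, 2], minlength=h * w)
    sum_i = np.bincount(flat, weights=points[:, 3], minlength=h * w)
    occupied = density > 0

    mean_z = np.zeros(h * w)
    mean_i = np.zeros(h * w)
    mean_z[occupied] = sum_z[occupied] / density[occupied]
    mean_i[occupied] = sum_i[occupied] / density[occupied]

    # Per-cell maximum height via an unbuffered scatter-max.
    max_z = np.full(h * w, -np.inf)
    np.maximum.at(max_z, flat, points[:, 2])
    max_z[~occupied] = 0.0

    return np.stack([c.reshape(h, w) for c in
                     (mean_z, max_z, density.astype(float), mean_i)])
```

The resulting channels can then be stacked with the co-registered multispectral bands to form the multi-channel input tensor for the 2D U-Net.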
Initial experiments yielded promising results. Site-specific models achieved overall accuracies above 89% and F1 scores above 0.9, although their performance declined to approximately 69% in dense vegetation. The generic model performed far worse, with accuracy dropping to 28.48%, highlighting the difficulty of generalising across diverse vegetation types. These findings underscore the limitations imposed by complex environments, limited reference data, and the scarcity of NPV as a minority class. To address these issues, further advancements are proposed, including integrating additional LiDAR data, expanding the training datasets, and employing data augmentation. Data augmentation, in particular, can compensate for environmental and illumination variability, improving the model's ability to learn underrepresented classes and increasing robustness across diverse ecosystems.
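As an illustration of such an augmentation strategy, the sketch below applies geometric transforms jointly to the input stack and its label mask, and radiometric jitter to the spectral channels only, mimicking illumination variability without distorting the LiDAR-derived layers. The (C, H, W) layout, the channel split, and the jitter ranges are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def augment(image, mask, spectral_channels=slice(0, 5)):
    """Augment a (C, H, W) input stack and its (H, W) label mask
    (a sketch; the split between spectral and LiDAR-derived
    channels is an assumption)."""
    # Geometric transforms apply to image and mask alike so that
    # labels stay aligned with the pixels they describe.
    if rng.random() < 0.5:                      # horizontal flip
        image, mask = image[..., ::-1], mask[..., ::-1]
    if rng.random() < 0.5:                      # vertical flip
        image, mask = image[:, ::-1, :], mask[::-1, :]
    k = rng.integers(0, 4)                      # random 90-degree rotation
    image = np.rot90(image, k, axes=(1, 2))
    mask = np.rot90(mask, k, axes=(0, 1))

    # Radiometric jitter simulates illumination variability, so it is
    # applied to the spectral bands only. Copy first so the caller's
    # array is not modified in place.
    image = image.copy()
    gain = rng.uniform(0.9, 1.1)
    offset = rng.uniform(-0.02, 0.02)
    image[spectral_channels] = image[spectral_channels] * gain + offset
    return image, mask
```

Restricting the radiometric jitter to the spectral bands reflects the physical asymmetry of the inputs: illumination varies between flights, whereas LiDAR-derived height and density layers do not.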
The anticipated outcomes include improved identification of plant growth forms and, in turn, more reliable estimates of vegetation structural metrics. These advancements support estimates of aboveground biomass, assessments of vegetation water content, and the evaluation of other critical ecosystem services. The framework leverages voxel-projected features to support vegetation analysis and improve classification performance, and it aims to enable high-resolution mapping of FVC components by plant growth form, bridging the gap between fine-scale UAS observations and regional-scale satellite imagery to support ecosystem monitoring.
How to cite: Sotomayor, L., Lucieer, A., Turner, D., Lewis, M., Levick, S., and Kattenborn, T.: Deep learning for identification of 3D plant growth forms in Fractional Vegetation Cover, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-3984, https://doi.org/10.5194/egusphere-egu25-3984, 2025.