EGU25-10295, updated on 15 Mar 2025
https://doi.org/10.5194/egusphere-egu25-10295
EGU General Assembly 2025
© Author(s) 2025. This work is distributed under
the Creative Commons Attribution 4.0 License.
Poster | Wednesday, 30 Apr, 14:00–15:45 (CEST), Display time Wednesday, 30 Apr, 14:00–18:00
Hall X3, X3.140
Automatic Quantification of Instream Large Wood Storage Combining Machine Learning and High-Resolution Aerial Imagery
Janbert Aarnink1, Gabriele Consoli1, Bryce Finch1, Marc O'Callaghan1, Ivan Pascal1, Samuel Wiesmann3, and Virginia Ruiz-Villanueva2
  • 1University of Lausanne, IDYST, RivES, Switzerland (janbert.aarnink@unil.ch)
  • 2Institute of Geography, University of Bern, Switzerland
  • 3Swiss National Park, Zernez, Switzerland

Large wood (LW) plays a fundamental role in maintaining the health and functionality of river ecosystems. LW influences hydrodynamics by altering flow patterns, contributes to sediment transport processes by trapping and redistributing sediment, and shapes diverse river channel forms. Moreover, LW enhances habitat complexity and diversity, sustaining biodiversity. Understanding and accurately quantifying LW storage is vital for a range of river management activities, including designing effective habitat restoration projects and implementing flood mitigation strategies. However, traditional field surveys and manual analysis of aerial imagery are labour-intensive, time-consuming, and limited in spatial and temporal scope. Advancing tools and techniques for LW quantification is therefore critical to enabling more efficient and widespread integration of wood into river restoration efforts.
This study introduces a fully automated method integrating high-resolution drone imagery and advanced machine learning algorithms to detect and quantify instream LW. Leveraging convolutional neural networks (CNNs), we trained a YOLOv10 model for wood detection and a YOLOv8 model for wood segmentation using datasets from eight rivers in the Swiss Alps and Argentinean Andes. An independent dataset from the Avançon de Nant River in Switzerland was used for method validation, ensuring the robustness and generalizability of the approach. 

Our detection model achieved 90% accuracy in wood volume estimation and identified 97% of wood pieces in the largest size bracket at a 0.3 confidence threshold, demonstrating high detection reliability. The segmentation model reached a mean Average Precision (mAP) of 70%, effectively distinguishing wood pixels from background pixels despite slight underestimations of wood diameter for short, wide pieces. By automating both detection and volume estimation, our method addresses the limitations of traditional field-based approaches and significantly reduces human effort and the potential for error.
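The pipeline described above can be sketched in miniature. The snippet below is a hypothetical illustration, not the authors' code: it filters detections at the 0.3 confidence threshold mentioned here and approximates each wood piece as a cylinder to estimate volume (the cylindrical assumption and all names are ours; the study derives dimensions from YOLOv8 segmentation masks).

```python
import math

# Confidence cutoff reported in the abstract for the detection model.
CONF_THRESHOLD = 0.3

def filter_detections(detections, threshold=CONF_THRESHOLD):
    """Keep only detections whose confidence meets the threshold."""
    return [d for d in detections if d["conf"] >= threshold]

def piece_volume(length_m, diameter_m):
    """Approximate one wood piece as a cylinder: V = pi * r^2 * L."""
    radius = diameter_m / 2.0
    return math.pi * radius ** 2 * length_m

def total_wood_volume(pieces):
    """Sum cylindrical volumes over (length, diameter) pairs in metres."""
    return sum(piece_volume(length, diameter) for length, diameter in pieces)

# Toy detections: dimensions would come from segmentation masks in practice.
detections = [
    {"conf": 0.85, "length": 4.0, "diameter": 0.3},
    {"conf": 0.15, "length": 2.0, "diameter": 0.2},  # below threshold, dropped
]
kept = filter_detections(detections)
volume = total_wood_volume([(d["length"], d["diameter"]) for d in kept])
```

In the actual workflow, per-piece lengths and diameters would be extracted from segmentation masks on georeferenced drone orthoimagery rather than supplied by hand; the arithmetic above only illustrates how per-piece geometry aggregates to a reach-scale storage estimate.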

The approach effectively detected wood across different environmental conditions, although challenges such as differentiating wood from similar-coloured substrates and accounting for partially submerged pieces remain. Expanding the training dataset to include more diverse environmental scenarios could enhance model accuracy and reliability.

This scalable and efficient method has substantial implications for monitoring river wood dynamics over large spatial and temporal scales. It provides a powerful, easy-to-use tool for scientists, conservationists, and river managers to better understand wood storage, improve habitat restoration efforts, and implement more informed flood risk management practices. Integrating UAV technology and machine learning significantly advances fluvial geomorphology studies, enabling consistent data collection in complex natural environments and informing sustainable management strategies.

How to cite: Aarnink, J., Consoli, G., Finch, B., O'Callaghan, M., Pascal, I., Wiesmann, S., and Ruiz-Villanueva, V.: Automatic Quantification of Instream Large Wood Storage Combining Machine Learning and High-Resolution Aerial Imagery, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-10295, https://doi.org/10.5194/egusphere-egu25-10295, 2025.