EGU25-19171, updated on 15 Mar 2025
https://doi.org/10.5194/egusphere-egu25-19171
EGU General Assembly 2025
© Author(s) 2025. This work is distributed under
the Creative Commons Attribution 4.0 License.
Poster | Friday, 02 May, 10:45–12:30 (CEST), Display time Friday, 02 May, 08:30–12:30
Hall X4, X4.135
Towards a Scalable Deep Learning Framework for Forest Monitoring under Challenging Conditions with Multimodal Data
Lorenzo Beltrame1, Jules Salzinger1, Jasmin Lampert2, and Phillipp Fanta-Jende1
  • 1Austrian Institute of Technology, Vision Automation and Control, Wien, Austria
  • 2Austrian Institute of Technology, Digital Safety and Security, Wien, Austria

Frequent cloud cover and terrain-induced shadows pose significant challenges for reliable forest monitoring. Traditional monitoring methods, such as ground-based observations and aerial surveys, often suffer from low temporal resolution, making it difficult to track seasonal changes or detect sudden forest anomalies such as windthrow damage. Earth Observation (EO), particularly Sentinel-2 imagery, offers high revisit rates and global coverage, but these advantages are diminished by the persistent presence of clouds and shadows, especially during the winter months in mountainous areas. Forest anomaly detection and windthrow damage assessment benefit particularly from the increased temporal resolution provided by cloud- and shadow-free Sentinel-2 imagery.

The SAFIR project aims to develop a scalable and robust framework for comprehensive forest monitoring, with a focus on resilience in complex terrain, including mountainous regions. To fully leverage the advantages of EO within the project, effective preprocessing techniques are needed to address cloud and shadow disturbances. These disturbances can be overcome with a method that predicts missing image information by reconstructing the surface albedo. This process integrates spatial, spectral, temporal, and physical priors into the image restoration, allowing meaningful information to be extracted from partially obscured satellite measurements.
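To make the role of such priors concrete, the following minimal sketch (illustrative only, not the SAFIR implementation) shows how a data-fidelity term on clear pixels could be combined with simple spatial, temporal, and physical priors in an albedo-restoration loss. All function names and weights are hypothetical, and a spectral prior is omitted for brevity.

```python
# Minimal sketch, assuming a learned restoration model that outputs an
# albedo estimate for every pixel. Not the authors' code; the prior terms
# below are generic stand-ins for the priors named in the abstract.
import torch

def restoration_loss(pred_albedo, observation, clear_mask, prev_albedo,
                     w_spatial=0.1, w_temporal=0.05, w_physical=1.0):
    """pred_albedo, observation: (B, C, H, W); clear_mask: (B, 1, H, W) in {0, 1};
    prev_albedo: reconstruction from the previous acquisition, same shape."""
    # Data fidelity: only clear (cloud- and shadow-free) pixels constrain the fit.
    fidelity = ((pred_albedo - observation) ** 2 * clear_mask).sum() \
        / clear_mask.sum().clamp(min=1)
    # Spatial prior: total-variation-style smoothness within each band.
    tv = (pred_albedo[..., 1:, :] - pred_albedo[..., :-1, :]).abs().mean() \
        + (pred_albedo[..., :, 1:] - pred_albedo[..., :, :-1]).abs().mean()
    # Temporal prior: surface albedo should vary slowly between acquisitions.
    temporal = (pred_albedo - prev_albedo).abs().mean()
    # Physical prior: albedo is bounded in [0, 1]; penalize violations.
    physical = torch.relu(pred_albedo - 1).mean() + torch.relu(-pred_albedo).mean()
    return fidelity + w_spatial * tv + w_temporal * temporal + w_physical * physical
```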

This contribution introduces a concept for a modular deep learning framework designed to process cloudy or shadowed satellite images and predict the corresponding albedo values. The framework consists of two core modules: a shadow remover and a cloud remover. Both modules are pretrained on large cloud-free satellite datasets to build robust spatiotemporal embeddings and subsequently fine-tuned using physics-based methods to improve accuracy in restoring shadowed and clouded image areas. Unlike traditional approaches that prioritize visual clarity, this framework is optimized for machine learning: the objective is to create enhanced data products for downstream forest monitoring applications. The effectiveness of this approach is validated by comparing the results with non-enhanced Sentinel-2 data, making the downstream tasks a methodological validation step.
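The modular structure could look like the following PyTorch sketch, an assumption for illustration rather than the authors' architecture: both restoration modules share one interface, so each can be pretrained and fine-tuned independently before being chained. Class names, the band count, and the layer sizes are placeholders.

```python
# Minimal sketch of a two-module restoration pipeline (assumed design,
# not the SAFIR implementation). Each module takes an image plus a mask
# of the pixels it is responsible for restoring.
import torch
from torch import nn

class RestorationModule(nn.Module):
    """Small encoder-decoder placeholder; the real modules would build
    spatiotemporal embeddings from Sentinel-2 time series."""
    def __init__(self, bands: int = 12, hidden: int = 64):
        # bands=12 assumes Sentinel-2 L2A-style input; adjust as needed.
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(bands + 1, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, bands, 3, padding=1),
        )

    def forward(self, image, mask):
        # mask (B, 1, H, W) flags the pixels to restore (shadow or cloud).
        return self.net(torch.cat([image, mask], dim=1))

class ForestRestorationPipeline(nn.Module):
    def __init__(self):
        super().__init__()
        self.shadow_remover = RestorationModule()
        self.cloud_remover = RestorationModule()

    def forward(self, image, shadow_mask, cloud_mask):
        # Remove shadows first, then fill clouded areas of the result.
        deshadowed = self.shadow_remover(image, shadow_mask)
        return self.cloud_remover(deshadowed, cloud_mask)
```

Keeping the two modules behind a shared interface means each can be swapped, pretrained on cloud-free data, or fine-tuned with physics-based objectives without touching the rest of the pipeline.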

Validation also draws on multimodal data, integrating satellite imagery with high-resolution Unmanned Aerial Vehicle (UAV) data. Planned UAV campaigns in Portugal, Germany, and Austria capture low-altitude imagery at 120 m, providing ground-truth validation by revealing surface conditions beneath cloud cover. This validation step supports the fine-tuning of the image restoration models and ensures that restored satellite images align closely with real-world conditions.
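As an illustration of how such a comparison might be computed (a hypothetical helper, not the project's validation code), a UAV orthomosaic can be block-averaged to the 10 m Sentinel-2 grid and compared against the restored pixels. The 0.1 m UAV ground sampling distance implied by factor=100 is an assumption.

```python
# Minimal sketch: compare restored Sentinel-2 pixels against a co-registered
# UAV orthomosaic aggregated to the 10 m grid. Illustrative only.
import numpy as np

def validate_against_uav(restored, uav_highres, restored_mask, factor=100):
    """restored: (H, W) albedo at 10 m; uav_highres: (H*factor, W*factor)
    at ~0.1 m; restored_mask: (H, W) bool, True where pixels were reconstructed."""
    h, w = restored.shape
    # Block-average the UAV mosaic down to Sentinel-2 resolution.
    uav_10m = uav_highres.reshape(h, factor, w, factor).mean(axis=(1, 3))
    # Evaluate only the reconstructed (formerly cloud/shadow) pixels.
    diff = restored[restored_mask] - uav_10m[restored_mask]
    return {"rmse": float(np.sqrt(np.mean(diff ** 2))),
            "bias": float(np.mean(diff))}
```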

By leveraging heterogeneous data sources, including high-quality in situ UAV data, this contribution introduces a scalable concept for high-frequency satellite monitoring. The framework aims to go beyond experimental setups and achieve operational deployment within ESA's Green Transition Information Factory (GTIF) initiative, making EO-based forest monitoring more efficient.

How to cite: Beltrame, L., Salzinger, J., Lampert, J., and Fanta-Jende, P.: Towards a Scalable Deep Learning Framework for Forest Monitoring under Challenging Conditions with Multimodal Data, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-19171, https://doi.org/10.5194/egusphere-egu25-19171, 2025.