EGU26-1153, updated on 13 Mar 2026
https://doi.org/10.5194/egusphere-egu26-1153
EGU General Assembly 2026
© Author(s) 2026. This work is distributed under
the Creative Commons Attribution 4.0 License.
Poster | Wednesday, 06 May, 10:45–12:30 (CEST), Display time Wednesday, 06 May, 08:30–12:30
 
Hall X4, X4.124
Atmospheric classification using lidar data and deep learning-based image segmentation
Adrián Canella-Ortiz1,2,3, Siham Tabik3,4, Sol Fernández-Carvelo1,2, Onel Rodríguez-Navarro1,2, Lucas Alados-Arboledas1,2, and Ana del Águila1,2
  • 1Andalusian Institute for Earth System Research (IISTA-CEAMA), University of Granada, Granada, Spain
  • 2Department of Applied Physics, University of Granada, Granada, Spain
  • 3Department of Computer Science and Artificial Intelligence, University of Granada, Granada, Spain
  • 4Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI), University of Granada, Granada, Spain

Reliable identification of aerosols and clouds in multiwavelength lidar observations remains essential for atmospheric monitoring and climate research. However, conventional processing pipelines rely heavily on expert-driven inversions and threshold-based algorithms. In this work, we present a deep-learning (DL) image segmentation framework designed to operate directly on image-like representations of the range-corrected signal (RCS) and applicable across distinct lidar platforms.
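As an illustrative sketch only (not the authors' code), the range-corrected signal underlying these image-like representations is the standard lidar quantity RCS(r) = P(r)·r², which can be log-scaled into pixel values; the bin width and scale bounds below are hypothetical placeholders:

```python
import math

def range_corrected_signal(raw_profile, bin_width_m=7.5):
    """Compute RCS(r) = P(r) * r^2 for one lidar profile.

    raw_profile: background-subtracted signal per range bin.
    bin_width_m: vertical resolution in metres (placeholder value).
    """
    rcs = []
    for i, p in enumerate(raw_profile):
        r = (i + 0.5) * bin_width_m  # bin-centre range in metres
        rcs.append(p * r * r)
    return rcs

def to_log_pixels(rcs, vmin=1e3, vmax=1e7):
    """Clip and log-scale one RCS profile into [0, 255] pixel values;
    stacking consecutive profiles yields the image-like representation."""
    lo, hi = math.log10(vmin), math.log10(vmax)
    pixels = []
    for v in rcs:
        v = min(max(v, vmin), vmax)
        pixels.append(round(255 * (math.log10(v) - lo) / (hi - lo)))
    return pixels
```

In practice the visualization scale bounds (`vmin`, `vmax`) are among the variables the abstract reports tuning, since they control how much dynamic range of the signal survives in the image.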

The models were trained on DL4Lidar, a new expert-annotated dataset derived from the ALHAMBRA multi-spectral Raman lidar (Granada, Spain). Using Mask R-CNN implemented in the Detectron2 framework, we systematically explored wavelength selection, visualization scale bounds, and architectural variants to maximize the discrimination of atmospheric structures. The resulting class-specific models capture the characteristic morphology and spatiotemporal variability of aerosols and clouds without relying on inversion-based preprocessing, demonstrating the suitability of computer-vision techniques for processing raw lidar observations.
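For readers unfamiliar with Detectron2, a minimal configuration sketch for a Mask R-CNN model of this kind is shown below; the dataset names, class count, and solver settings are assumptions for illustration, not the configuration used in this work:

```python
# Hedged sketch of a Detectron2 Mask R-CNN setup; dataset names and
# class count are hypothetical, not the authors' actual configuration.
from detectron2 import model_zoo
from detectron2.config import get_cfg

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
cfg.DATASETS.TRAIN = ("dl4lidar_train",)   # hypothetical registered dataset
cfg.DATASETS.TEST = ("dl4lidar_val",)      # hypothetical registered dataset
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 2        # e.g. aerosol layer, cloud (assumed)
cfg.SOLVER.BASE_LR = 2.5e-4                # placeholder solver settings
cfg.SOLVER.MAX_ITER = 5000

# Training would then proceed via detectron2.engine.DefaultTrainer:
#   trainer = DefaultTrainer(cfg)
#   trainer.resume_or_load(resume=False)
#   trainer.train()
```

The instance-segmentation formulation is a natural fit here: each detected mask delineates one atmospheric structure (a cloud or aerosol layer) in the time-height RCS image.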

To assess robustness beyond the training instrument, the trained models were applied directly, without retraining or domain adaptation, to measurements from MULHACEN, an independent Raman lidar located at the same facility as ALHAMBRA but with different hardware characteristics and signal levels. Despite these instrumental differences, the models exhibit stable behavior, correctly identifying cloud and aerosol structures across a wide range of atmospheric situations. This cross-instrument evaluation highlights the capacity of the proposed method to generalize under realistic domain shifts, suggesting that the morphological characteristics learned from RCS imagery are transferable across similar ground-based systems.

A sensitivity analysis will evaluate the models under different input choices, such as attenuated backscatter versus RCS as the input imagery. Moreover, the best DL model resulting from this analysis will be tested on other lidar instruments within the EARLINET/ACTRIS network and on spaceborne observations such as ATLID onboard the EarthCARE mission.

Overall, this work introduces a unified DL-based pipeline for atmospheric structure segmentation from multi-wavelength lidar measurements, demonstrating its potential for operational use and large-scale automated atmospheric classification across heterogeneous lidar platforms.

Acknowledgements

This research is part of the Spanish national project PID2023-151817OA-I00, titled DeepAtmo, funded by MICIU/AEI/10.13039/501100011033.

How to cite: Canella-Ortiz, A., Tabik, S., Fernández-Carvelo, S., Rodríguez-Navarro, O., Alados-Arboledas, L., and del Águila, A.: Atmospheric classification using lidar data and deep learning-based image segmentation, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-1153, https://doi.org/10.5194/egusphere-egu26-1153, 2026.