EGU25-13734, updated on 15 Mar 2025
https://doi.org/10.5194/egusphere-egu25-13734
EGU General Assembly 2025
© Author(s) 2025. This work is distributed under
the Creative Commons Attribution 4.0 License.
Oral | Tuesday, 29 Apr, 10:00–10:10 (CEST) | Room C
Reconstructing 3D vertical cloud profiles using cloud dynamics
Emiliano Diaz1, Kyriaki-Margarita Bintsi7, Giuseppe Castiglioni2, Michael Eisinger3, Lilli Freischem4, Stella Girtsou5, Emmanuel Johnson6, William Jones4, Anna Jungbluth3, and Joppe Massant
  • 1Universitat de Valencia (emiliano.diaz@uv.es)
  • 2University of Sussex (g.m.a.castiglione@gmail.com)
  • 3European Space Agency (anna.jungbluth@esa.int)
  • 4University of Oxford (william.jones@physics.ox.ac.uk)
  • 5National Observatory of Athens (girtsou.s@gmail.com)
  • 6UN Environment Programme (jemanjohnson34@gmail.com)
  • 7Harvard University (margarita.3110@hotmail.com)

Clouds influence Earth’s climate by reflecting sunlight and trapping heat, but their role in climate change remains uncertain and is a major source of unpredictability in climate models. Global 3D cloud data can help improve these predictions.

Observations from NASA’s CloudSat mission have advanced our understanding of cloud structures but are limited by long revisit times and narrow coverage. Imaging instruments offer broader, faster coverage but lack vertical information.

In [1], a deep learning approach addressed this challenge by combining MSG/SEVIRI satellite imagery with CloudSat profiles to extrapolate vertical cloud structures beyond observed tracks. Using geospatially-aware Masked Autoencoders, models were pre-trained on a year of MSG data (2010) and fine-tuned with CloudSat tracks as ground truth. This self-supervised training improved reconstruction, outperforming previous methods and simpler architectures [2].
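As a rough illustration of the Masked Autoencoder pre-training idea (a generic sketch, not the implementation, patch size, or mask ratio used in [1]), the core step is to hide a random subset of image patches and train the model to reconstruct them:

```python
import numpy as np

def random_patch_mask(image, patch=16, mask_ratio=0.75, seed=0):
    """Split a 2D image into non-overlapping patches and zero out a
    random subset, as in MAE-style self-supervised pre-training.
    All values here (patch=16, mask_ratio=0.75) are illustrative."""
    h, w = image.shape
    ph, pw = h // patch, w // patch
    n = ph * pw
    rng = np.random.default_rng(seed)
    masked_idx = rng.choice(n, size=int(n * mask_ratio), replace=False)
    visible = np.ones(n, dtype=bool)
    visible[masked_idx] = False
    masked = image.copy()
    for i in masked_idx:
        r, c = divmod(i, pw)
        masked[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch] = 0.0
    return masked, visible

# Example: a 64x64 single-channel "image" split into 16 patches,
# of which 75% (12 patches) are masked.
img = np.arange(64 * 64, dtype=float).reshape(64, 64)
masked_img, visible = random_patch_mask(img)
```

During pre-training the encoder sees only the visible patches, and a lightweight decoder reconstructs the masked ones; the pre-trained encoder is then reused for the downstream fine-tuning task.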

In this work, we explore to what degree including information about the temporal dynamics of clouds can further improve the quality of the 3D cloud reconstruction. Instead of using a single image as input, we use a temporal sequence of MSG/SEVIRI images spanning a period of several hours before and after the target cloud vertical profile. We combine the geospatial encodings used in [1] with the temporal encoding used in [3] to embed these spatiotemporal MSG/SEVIRI cubes in a rich, general-purpose latent space. We then use a fine-tuning model as in [1] to map the embeddings into 3D radar reflectivity maps.
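A temporal encoding in the style of [3] can be sketched as a standard sinusoidal positional code over each frame's time offset, added to the frame embeddings. The offsets, embedding dimension, and token values below are hypothetical stand-ins, not the configuration used in this work:

```python
import numpy as np

def sinusoidal_encoding(positions, dim):
    """Transformer-style sinusoidal encoding of a 1-D coordinate
    (here, the time offset of each frame relative to the target
    profile, in hours). Returns an array of shape (len(positions), dim)."""
    positions = np.asarray(positions, dtype=float)[:, None]
    i = np.arange(dim // 2)[None, :]
    angles = positions / (10000.0 ** (2 * i / dim))
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=1)

# Hypothetical sequence: 5 frames from 3 h before to 3 h after the
# target profile, each already embedded as a 32-d token (random here).
offsets_h = [-3.0, -1.5, 0.0, 1.5, 3.0]
d = 32
rng = np.random.default_rng(0)
frame_tokens = rng.normal(size=(len(offsets_h), d))
t_enc = sinusoidal_encoding(offsets_h, d)  # (5, 32) temporal code
tokens = frame_tokens + t_enc              # each frame now carries its offset
```

The same additive scheme extends to the geospatial encodings of [1], so each token in the spatiotemporal cube carries both where and when it was observed.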

We perform a sensitivity analysis to explore how the quality of the reconstruction varies as a function of the amount of temporal information included. We also explore the relative strengths of different pre-training strategies with respect to the quality of the 3D reflectivity reconstruction and of cloud type segmentations. With this, we provide insights into self-supervised learning for atmospheric applications.

References

  • [1] Stella Girtsou et al. “3D Cloud reconstruction through geospatially-aware Masked Autoencoders”. 2024. arXiv: 2501.02035 [cs.CV]. URL: https://arxiv.org/abs/2501.02035.
  • [2] Sarah Brüning et al. “Artificial intelligence (AI)-derived 3D cloud tomography from geostationary 2D satellite data”. In: Atmos. Meas. Tech. 17.3 (Feb. 2024), pp. 961–978.
  • [3] Yezhen Cong et al. “SatMAE: Pre-training Transformers for Temporal and Multi-Spectral Satellite Imagery”. 2023. arXiv: 2207.08051 [cs.CV]. URL: https://arxiv.org/abs/2207.08051.

How to cite: Diaz, E., Bintsi, K.-M., Castiglioni, G., Eisinger, M., Freischem, L., Girtsou, S., Johnson, E., Jones, W., Jungbluth, A., and Massant, J.: Reconstructing 3D vertical cloud profiles using cloud dynamics, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-13734, https://doi.org/10.5194/egusphere-egu25-13734, 2025.