EGU25-16891, updated on 15 Mar 2025
https://doi.org/10.5194/egusphere-egu25-16891
EGU General Assembly 2025
© Author(s) 2025. This work is distributed under
the Creative Commons Attribution 4.0 License.
Oral | Friday, 02 May, 15:05–15:15 (CEST), Room -2.92
Reconstructing 3D cloud fields from multispectral satellite images using deep learning
Stella Girtsou1, Lilli Freischem2, Kyriaki-Margarita Bintsi3, Giuseppe Castiglione4, Emiliano Diaz Salas-Porras5, Michael Eisinger6, Emmanuel Johnson7, William Jones2, Anna Jungbluth6, and Joppe Massant8
  • 1National Observatory of Athens, Institute for Astronomy, Astrophysics, Space Applications and Remote Sensing, Athens, Greece (sgirtsou@noa.gr)
  • 2Atmospheric, Oceanic and Planetary Physics, University of Oxford, UK (lilli.freischem@physics.ox.ac.uk)
  • 3Department of Computing, Imperial College London, UK
  • 4University of Sussex, UK
  • 5Universitat de València, Spain
  • 6European Space Agency
  • 7CSIC-UCM-IGEO, Spain
  • 8Royal Belgian Institute of Natural Sciences, Belgium

Clouds affect Earth’s radiation balance by reflecting incoming sunlight (cooling effect) and trapping outgoing infrared radiation (warming effect). Their vertical distribution in the atmosphere significantly influences their radiative properties and overall climate impact. However, how clouds will respond to climate change remains poorly understood: cloud feedbacks are the largest source of uncertainty in climate projections. Global 3D cloud data can help reduce these uncertainties, improve climate predictions, and support better decision-making.

Clouds are observed globally from space using satellites, which provide insights into their distribution, structure, and evolution. Observations from the Cloud Profiling Radar (CPR) aboard NASA’s CloudSat mission have provided valuable information on the vertical distribution of clouds. However, its long revisit time (~16 days), narrow swath (1.4 km), and observations limited to the same local time each day hinder our ability to study the temporal evolution of clouds or their diurnal cycle. In contrast, imaging instruments observe larger regions with higher temporal resolution but only offer a top-down view with limited vertical information.

In this work, we apply deep learning to images observed by geostationary satellites paired with vertical cloud profiles to extrapolate the vertical profiles beyond the observed tracks. Specifically, we use 11-channel imagery from the MSG/SEVIRI instrument, colocated with CPR vertical profiles. First, we pre-train models using self-supervised learning methods, specifically (geospatially-aware) Masked Autoencoders, applied to MSG/SEVIRI data from 2010. The pre-trained models are then fine-tuned for the 3D cloud reconstruction task using paired image-profile data. As only a small fraction of images overlap with CloudSat observations, the pre-training step enables us to exploit the full information contained in the MSG/SEVIRI images. We find that pre-training consistently improves reconstruction performance, particularly in complex regions such as the inter-tropical convergence zone. Notably, geospatially-aware pre-trained models incorporating time and coordinate encodings outperform both randomly initialized networks and simpler U-Net architectures, leading to improved reconstruction results compared to previous work.
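The core of the pre-training step above is random patch masking of multispectral imagery, as in a Masked Autoencoder: most patches of an image are hidden, and a network is trained to reconstruct them. A minimal NumPy sketch of the masking operation is shown below; the patch size (16), mask ratio (0.75), and tile size (64×64) are illustrative assumptions, not the settings used in this work.

```python
import numpy as np

def mask_patches(image, patch=16, mask_ratio=0.75, rng=None):
    """Zero out a random fraction of non-overlapping patches.

    `image` has shape (C, H, W), e.g. C=11 for the SEVIRI channels.
    Returns the masked image and a boolean patch-grid mask
    (True = patch was hidden from the encoder).
    """
    rng = np.random.default_rng(rng)
    c, h, w = image.shape
    gh, gw = h // patch, w // patch            # patch grid dimensions
    n = gh * gw
    n_mask = int(round(mask_ratio * n))
    mask = np.zeros(n, dtype=bool)
    mask[rng.choice(n, size=n_mask, replace=False)] = True
    out = image.copy()
    for i in np.flatnonzero(mask):             # zero each masked patch
        r, q = divmod(i, gw)
        out[:, r*patch:(r+1)*patch, q*patch:(q+1)*patch] = 0.0
    return out, mask.reshape(gh, gw)

# Example: a synthetic 11-channel, SEVIRI-like tile
img = np.random.default_rng(0).standard_normal((11, 64, 64))
masked, mask = mask_patches(img, patch=16, mask_ratio=0.75, rng=0)
```

During pre-training, the reconstruction loss would be computed only on the masked patches; at fine-tuning time, the masking is dropped and the pre-trained encoder feeds a decoder that predicts the CPR vertical profile instead.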

In the future, we plan to extend this method to longer time periods and apply it to ESA’s EarthCARE data, once available, to further improve 3D reconstructions and enable the development of long-term 3D cloud products.

How to cite: Girtsou, S., Freischem, L., Bintsi, K.-M., Castiglione, G., Diaz Salas-Porras, E., Eisinger, M., Johnson, E., Jones, W., Jungbluth, A., and Massant, J.: Reconstructing 3D cloud fields from multispectral satellite images using deep learning, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-16891, https://doi.org/10.5194/egusphere-egu25-16891, 2025.