EGU21-13334, updated on 05 May 2023
https://doi.org/10.5194/egusphere-egu21-13334
EGU General Assembly 2021
© Author(s) 2023. This work is distributed under
the Creative Commons Attribution 4.0 License.

Deep Learning for Three-Dimensional Volumetric Recovery of Cloud Fields

Yael Sde-Chen1, Yoav Y. Schechner1, Vadim Holodovsky1, and Eshkol Eytan2
  • 1Technion - Israel Institute of Technology, Viterbi Faculty of Electrical Engineering, Haifa, Israel
  • 2The Weizmann Institute of Science, Department of Earth and Planetary Science, Rehovot, Israel

Clouds are a key factor in Earth's energy budget and thus significantly affect climate and weather predictions. These effects are dominated by shallow warm clouds (Sherwood et al., 2014; Zelinka et al., 2020), which tend to be small and heterogeneous. Therefore, remote sensing of clouds and three-dimensional (3D) volumetric reconstruction of their internal properties are of significant importance.

Recovery of the volumetric information of clouds relies on 3D radiative transfer, which models 3D multiple scattering. This model is complex and nonlinear, so inverting it poses a major challenge and typically requires simplification. A common relaxation assumes that clouds are horizontally uniform and infinitely broad, leading to one-dimensional modeling. However, this assumption is generally invalid, since clouds are naturally highly heterogeneous. A novel alternative is to perform cloud retrieval with tools of 3D scattering tomography, in which multiple satellite images of the clouds are acquired from different points of view. For example, simultaneous multi-view radiometric images of clouds are proposed by the CloudCT project, funded by the ERC. Unfortunately, 3D scattering tomography requires high computational resources. In practice, this results in slow run times and prevents large-scale analysis. Moreover, existing scattering tomography is based on iterative optimization, which is sensitive to initialization.
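To illustrate the structure of such iterative physics-based retrieval, the following toy sketch (an illustrative assumption, not the solver used in this work) fits a 3D extinction field to multi-view measurements by gradient descent on a data-fit loss. For brevity, the forward model here is a purely linear projection along the grid axes; a real retrieval replaces it with nonlinear 3D radiative transfer including multiple scattering (e.g., rendered with a code such as SHDOM), which is what makes the inversion computationally expensive and sensitive to the initial guess beta0.

    import numpy as np

    def forward(beta):
        # Stand-in forward model: project the extinction field along three
        # orthogonal "view" axes (a linear toy; real retrievals render images
        # with a 3D radiative-transfer code).
        return [beta.sum(axis=a) for a in range(3)]

    def retrieve(images, shape, beta0=None, n_iters=500, lr=1e-3):
        # Gradient descent on 0.5 * sum_a ||P_a(beta) - I_a||^2, starting from
        # an initial guess beta0 (the initialization strongly affects the result).
        beta = np.zeros(shape) if beta0 is None else beta0.copy()
        for _ in range(n_iters):
            preds = forward(beta)
            residuals = [p - i for p, i in zip(preds, images)]
            # The gradient is the backprojection of the residuals
            # (the adjoint of the axis sums).
            grad = np.zeros(shape)
            for a, r in enumerate(residuals):
                grad += np.expand_dims(r, axis=a)
            beta = np.maximum(beta - lr * grad, 0.0)  # extinction is non-negative
        return beta

    # Synthetic usage: build projections of a known field and retrieve it.
    rng = np.random.default_rng(0)
    truth = rng.random((16, 16, 16))
    beta_hat = retrieve(forward(truth), truth.shape)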

In this work we introduce a deep neural network for 3D volumetric reconstruction of clouds. In recent years, supervised learning using deep neural networks has led to remarkable results in various fields ranging from computer vision to medical imaging. However, these deep learning techniques have not been extensively studied in the context of volumetric atmospheric science and specifically cloud research.

We present a convolutional neural network (CNN) whose architecture is inspired by the physical nature of clouds. Due to the lack of real-world datasets, we train the network in a supervised manner using a physics-based simulator that generates realistic volumetric cloud fields. In addition, we propose a hybrid approach, which combines the proposed neural network with an iterative physics-based optimization technique.
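As a rough illustration of the learning component, the following PyTorch sketch (the layer choices are assumptions; the abstract does not specify the architecture) maps a stack of multi-view images to a non-negative 3D extinction grid and is trained in a supervised manner against ground-truth volumes from a physics-based simulator.

    import torch
    import torch.nn as nn

    class CloudRetrievalCNN(nn.Module):
        def __init__(self, n_views=10, grid=(32, 32, 32)):
            super().__init__()
            self.grid = grid
            # 2D encoder over the stacked views (views act as input channels).
            self.encoder = nn.Sequential(
                nn.Conv2d(n_views, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d((8, 8)),
            )
            # Decoder to a volumetric extinction field; Softplus keeps it >= 0.
            self.decoder = nn.Sequential(
                nn.Flatten(),
                nn.Linear(128 * 8 * 8, 256), nn.ReLU(),
                nn.Linear(256, grid[0] * grid[1] * grid[2]), nn.Softplus(),
            )

        def forward(self, images):            # images: (B, n_views, H, W)
            return self.decoder(self.encoder(images)).view(-1, *self.grid)

    model = CloudRetrievalCNN()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()
    # Supervised training loop (simulated_loader is a hypothetical loader
    # yielding image/extinction pairs from the physics-based simulator):
    # for images, beta_true in simulated_loader:
    #     loss = loss_fn(model(images), beta_true)
    #     optimizer.zero_grad(); loss.backward(); optimizer.step()

In a hybrid scheme of this kind, the network output could then serve as the initial guess (beta0 in the earlier sketch) of the iterative physics-based solver, so that fewer expensive radiative-transfer iterations are needed.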

We demonstrate the recovery performance of the proposed method on cloud fields. At the scale of a single cloud, the resulting quality is comparable to that of state-of-the-art methods, while the run time improves by orders of magnitude. In contrast to existing physics-based methods, our network offers scalability, which enables the reconstruction of wider cloud fields. Finally, we show that the hybrid approach leads to improved retrieval in a fast process.

How to cite: Sde-Chen, Y., Schechner, Y. Y., Holodovsky, V., and Eytan, E.: Deep Learning for Three-Dimensional Volumetric Recovery of Cloud Fields, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-13334, https://doi.org/10.5194/egusphere-egu21-13334, 2021.