EGU22-12086
https://doi.org/10.5194/egusphere-egu22-12086
EGU General Assembly 2022
© Author(s) 2022. This work is distributed under
the Creative Commons Attribution 4.0 License.

GAN-based video prediction model for precipitation nowcasting

Yan Ji1,2, Bing Gong2, Michael Langguth2, Amirpasha Mozaffari2, Karim Mache2, Martin Schultz2, and Xiefei Zhi1
  • 1Nanjing University of Information Science & Technology, 210044, Nanjing, China (y.ji@fz-juelich.de)
  • 2Jülich Supercomputing Centre, Forschungszentrum Jülich, 52425 Jülich, Germany

Detecting and predicting heavy precipitation for the next few hours is of great importance for weather-related decision-making and early warning systems. Although great progress has been achieved in convection-permitting numerical weather prediction (NWP) over the past decades, video prediction models based on deep neural networks have become increasingly popular in recent years for precipitation nowcasting, where NWP models fail to capture the quickly varying precipitation patterns. However, previous video prediction studies for precipitation nowcasting showed that heavy precipitation events are barely captured. This has been attributed to optimization on pixel-wise losses, which fail to properly handle the inherent uncertainty. Hence, we present a novel video prediction model, named CLGAN, which embeds an adversarial loss and aims at improved nowcasting of heavy precipitation. The model applies a Generative Adversarial Network (GAN) as its backbone. The generator is a u-shaped encoder-decoder network (U-Net) equipped with recurrent LSTM cells, and the discriminator is a fully connected network with 3-D convolutional layers. The Eulerian persistence, the optical flow model DenseRotation and the advanced video prediction model PredRNN-v2 serve as baselines for comparison. The models' performance is evaluated in terms of application-specific scores, including the root mean square error (RMSE), the critical success index (CSI), the fractions skill score (FSS) and the method of object-based diagnostic evaluation (MODE). CLGAN is superior to the baseline models for dichotomous events: the CSI at a heavy-precipitation threshold of 8 mm/h is significantly higher, revealing improvements in accurately capturing heavy precipitation events. CLGAN also outperforms the baselines in terms of spatial verification scores such as FSS and MODE. We conclude that the predictions of our CLGAN architecture match the stochastic properties of ground-truth precipitation events better than those of previous video prediction methods. These results encourage the application of GAN-based video prediction architectures for extreme precipitation forecasting.
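For illustration, the snippet below is a minimal sketch of how the dichotomous CSI at the 8 mm/h threshold mentioned above could be computed on gridded precipitation fields. It uses NumPy only; the function name, the random example fields and the grid size are illustrative assumptions and are not taken from the authors' code.

```python
import numpy as np

def csi(forecast, observation, threshold=8.0):
    """Critical success index for a dichotomous event defined by `threshold` (mm/h).

    CSI = hits / (hits + misses + false alarms); a perfect score is 1.
    """
    fcst_event = forecast >= threshold
    obs_event = observation >= threshold

    hits = np.sum(fcst_event & obs_event)
    misses = np.sum(~fcst_event & obs_event)
    false_alarms = np.sum(fcst_event & ~obs_event)

    denom = hits + misses + false_alarms
    return hits / denom if denom > 0 else np.nan

# Illustrative example with synthetic precipitation fields (mm/h) on a 64x64 grid
rng = np.random.default_rng(0)
pred = rng.gamma(shape=0.5, scale=4.0, size=(64, 64))
obs = rng.gamma(shape=0.5, scale=4.0, size=(64, 64))
print(f"CSI at 8 mm/h: {csi(pred, obs):.3f}")
```

In practice such a score would be aggregated over many forecast lead times and cases before comparing CLGAN against the baselines.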

How to cite: Ji, Y., Gong, B., Langguth, M., Mozaffari, A., Mache, K., Schultz, M., and Zhi, X.: GAN-based video prediction model for precipitation nowcasting, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12086, https://doi.org/10.5194/egusphere-egu22-12086, 2022.