EGU24-7244, updated on 08 Mar 2024
https://doi.org/10.5194/egusphere-egu24-7244
EGU General Assembly 2024
© Author(s) 2024. This work is distributed under
the Creative Commons Attribution 4.0 License.

Typhoon precipitation forecasting based on Satellite Imagery Sequencing and Generative Adversarial Networks

Nan Yang and Xiaofeng Li
  • Institute of Oceanology, Chinese Academy of Sciences, China (yangnan@qdio.ac.cn)

Typhoon precipitation forecasting, especially nowcasting (short-term, up to two hours, high-resolution forecasting), is arguably one of the most demanding tasks in weather prediction. Traditional methods include 1) ensemble numerical weather prediction (NWP) systems and 2) advection of precipitation fields with radar-based wind estimates via optical-flow techniques. The former simulates the coupled physical equations of the atmosphere to generate multiple precipitation forecasts. In the latter, motion fields are estimated by optical flow, smoothness penalties are applied to approximate an advection forecast (see the sketch below), and stochastic perturbations are added to the motion field and intensity model. However, these methods either cannot meet the time requirements of nowcasting or depend on the advection equation, and these drawbacks limit forecasting performance. Satellite imagery, which can be regarded as a sequence of video frames, benefits from machine learning technologies such as deep learning and is therefore a promising basis for precipitation nowcasting.
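For concreteness, the following minimal sketch illustrates the classical advection-based nowcast described above, assuming a motion field (u, v) has already been estimated by an optical-flow method. It is an illustration of the traditional baseline, not the model proposed in this work.

```python
# Semi-Lagrangian advection nowcast: extrapolate a precipitation field
# along a (here assumed stationary) motion field estimated by optical flow.
import numpy as np
from scipy.ndimage import map_coordinates

def advect(field: np.ndarray, u: np.ndarray, v: np.ndarray, steps: int) -> np.ndarray:
    """Trace each pixel back one step against the flow (u, v), in pixels
    per step, and bilinearly sample the current field there; repeat."""
    h, w = field.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    forecast = field
    for _ in range(steps):
        coords = np.stack([yy - v, xx - u])  # backtracked sample positions
        forecast = map_coordinates(forecast, coords, order=1, mode="nearest")
    return forecast

# Example: extrapolate a synthetic field 6 steps with uniform eastward flow.
field = np.random.rand(64, 64)
u = np.full_like(field, 1.5)
v = np.zeros_like(field)
nowcast = advect(field, u, v, steps=6)
```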

Convolutional neural networks (CNNs), recurrent neural networks (RNNs), and their combinations are used to generate future frames from previous context frames: in general, the CNN captures spatial dependencies while the RNN captures temporal dependencies (a minimal example follows below). However, CNNs suffer from inductive biases (translation invariance and locality), so they cannot capture location-variant information (natural motion and transformation) and fail to extract long-range dependencies. RNNs, in turn, require time-consuming back-propagation through long sequences because of their recurrent structure. These drawbacks limit the operational utility of such methods and prevent skillful precipitation forecasting.
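As a point of reference, a textbook ConvLSTM cell, a common CNN-RNN combination, is sketched below in PyTorch. It shows how convolutions capture spatial structure while recurrent gates carry temporal state across frames; it is not the re-designed architecture proposed in this work.

```python
# Generic ConvLSTM cell: one convolution produces all four LSTM gates,
# so spatial filtering and recurrent gating happen in a single layer.
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch: int, hid_ch: int, kernel: int = 3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, kernel, padding=kernel // 2)

    def forward(self, x, state):
        h, c = state
        gates = self.conv(torch.cat([x, h], dim=1))
        i, f, o, g = gates.chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

# Usage: roll the cell over a satellite image sequence of shape (B, T, C, H, W).
x = torch.randn(2, 10, 1, 64, 64)
cell = ConvLSTMCell(in_ch=1, hid_ch=16)
h = torch.zeros(2, 16, 64, 64)
c = torch.zeros_like(h)
for t in range(x.size(1)):
    h, c = cell(x[:, t], (h, c))
```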

This work proposes a new artificial intelligence model to achieve skillful typhoon precipitation forecasting. Satellite imagery containing precipitation is organized into a series of sequences, each comprising multiple frames over time. We re-design the traditional CNN-RNN-based architecture to address information loss/forgetting and provide skillful precipitation forecasting. Furthermore, we introduce a generative adversarial strategy and propose a novel random-patch loss function, which ensures that the model generates high-fidelity precipitation forecasts (an illustrative sketch follows below). In summary, our model recasts complex tropical cyclone (TC) precipitation forecasting as a video prediction problem, avoiding many uncertainties in the physical process and enabling a fully data-driven deep-learning paradigm based on satellite image sequencing for discovering insights in weather forecasting-related sciences.
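Since the abstract does not specify the random-patch loss, the sketch below shows one plausible form such a loss could take: random crops of real and generated frames scored by a patch discriminator under a hinge objective. The patch size, patch count, and hinge formulation are assumptions for illustration only, not the formulation proposed in this work.

```python
# Hypothetical random-patch adversarial loss: score random crops of the
# forecast with a patch discriminator to encourage locally realistic texture.
import torch
import torch.nn.functional as F

def random_patches(frames: torch.Tensor, patch: int = 32, n: int = 4) -> torch.Tensor:
    """Crop n random patch x patch windows from each image in the batch."""
    b, c, h, w = frames.shape
    crops = []
    for _ in range(n):
        y = torch.randint(0, h - patch + 1, (1,)).item()
        x = torch.randint(0, w - patch + 1, (1,)).item()
        crops.append(frames[:, :, y:y + patch, x:x + patch])
    return torch.cat(crops, dim=0)  # shape (n*b, c, patch, patch)

def generator_patch_loss(discriminator, fake_frames: torch.Tensor) -> torch.Tensor:
    """Generator term: push discriminator scores up on generated patches."""
    return -discriminator(random_patches(fake_frames)).mean()

def discriminator_patch_loss(discriminator, real_frames, fake_frames):
    """Discriminator term: hinge loss on real vs. generated patches."""
    real = discriminator(random_patches(real_frames))
    fake = discriminator(random_patches(fake_frames.detach()))
    return F.relu(1 - real).mean() + F.relu(1 + fake).mean()
```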

How to cite: Yang, N. and Li, X.: Typhoon precipitation forecasting based on Satellite Imagery Sequencing and Generative Adversarial Networks, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-7244, https://doi.org/10.5194/egusphere-egu24-7244, 2024.