EGU24-13977, updated on 09 Mar 2024
https://doi.org/10.5194/egusphere-egu24-13977
EGU General Assembly 2024
© Author(s) 2024. This work is distributed under
the Creative Commons Attribution 4.0 License.

Representing the atmosphere using deep learning techniques: applications in radar echo data reconstruction and PM2.5 forecast error reduction

Mingming Zhu1, Lin Wu1,2, Hang Su1, and Zifa Wang1
  • 1Institute of Atmospheric Physics, Chinese Academy of Sciences, Key Laboratory of Atmospheric Environment and Extreme Meteorology, Beijing, China (zhumingming@mail.iap.ac.cn; wlin@mail.iap.ac.cn; suhang@mail.iap.ac.cn; zifawang@mail.iap.ac.cn)
  • 2Institute of Atmospheric Physics, Chinese Academy of Sciences, Carbon Neutrality Research Center (CNRC), Beijing, China (wlin@mail.iap.ac.cn)

The atmosphere is governed by the laws of atmospheric physics and chemistry. For decades, even centuries, these laws have been represented by differential equations, usually solved numerically at the scale defined by model grid cells. However, this representation paradigm reaches its limits when the underlying physics or chemistry is too complex or even unknown, especially given the multiscale nature of the atmosphere. In our data era, these laws are stored in immense datasets of observations and numerical simulations, and artificial intelligence techniques can retrieve them from data, albeit often with limited physical interpretation. Hybrid modeling [1] can thus balance physical modeling and data-driven modeling for a more comprehensive representation of the atmosphere. Through representation learning, the multiscale features of the atmosphere can be learnt and encoded in the weights of connected neurons across the multiple layers of a deep network, which is fundamentally different from the traditional atmospheric representation in formulae and equations. Here we elaborate this deep learning-based representation paradigm with two demonstration cases.
In the first case [2], we reveal a multiscale representation of the convective atmosphere by reconstructing radar echoes from Weather Research and Forecasting (WRF) model simulations and Himawari-8 satellite products using U-Net deep networks. We then diagnose the physical interpretations of the learnt representation with a sensitivity analysis method. We find stratified features: small-scale patterns such as echo intensities are sensitive to the WRF-simulated dynamic and thermodynamic variables, while larger-scale information about shapes and locations is mainly captured from the satellite images. Such an explainable representation of the convective atmosphere could inspire innovative data assimilation methods and hybrid models that overcome the conventional limits of nowcasting.
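The idea behind the sensitivity analysis can be sketched as follows. This is a minimal, illustrative example only, not the study's code: `toy_model` is a hypothetical stand-in for a trained U-Net mapping stacked input channels (e.g. WRF fields plus satellite bands) to a reconstructed echo field, and the per-channel sensitivities are estimated by finite-difference perturbation.

```python
import numpy as np

def toy_model(x):
    """Placeholder for a trained reconstruction network.

    x: (channels, H, W) stacked inputs; returns an (H, W) "echo" field
    via a weighted nonlinear blend (purely illustrative).
    """
    w = np.linspace(1.0, 2.0, x.shape[0])[:, None, None]
    return np.tanh((w * x).sum(axis=0))

def channel_sensitivity(model, x, eps=1e-3):
    """Finite-difference sensitivity of the output to each input channel."""
    base = model(x)
    sens = np.empty(x.shape[0])
    for c in range(x.shape[0]):
        xp = x.copy()
        xp[c] += eps  # perturb one channel (e.g. one WRF variable)
        sens[c] = np.abs(model(xp) - base).mean() / eps
    return sens / sens.sum()  # normalised relative sensitivities

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 32, 32))        # e.g. 3 WRF + 2 satellite channels
rel = channel_sensitivity(toy_model, x)
print(rel.round(3))                     # one relative share per input channel
```

Ranking these shares by channel indicates which inputs the learnt representation relies on most, which is the qualitative question the sensitivity diagnosis addresses.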
In the second case [3], we employ deep convolutional neural networks (CNNs) to represent the errors of fine particulate matter (PM2.5) forecasts from a chemistry-transport model (CTM), the Nested Air Quality Prediction Modeling System (NAQPMS), at lead times up to 240 hours across 180 monitoring sites in the Yangtze River Delta (YRD) region of China. The learnt multiscale error representation reduces the root mean square error (RMSE) of the PM2.5 forecasts by 16.3-34.2% on test cases in 2017-2018. We then probe the physical interpretation of the multiscale error representation using the Deep Learning Important FeaTures (DeepLIFT) interpretability method. We quantify significant contributions from sulfur dioxide (SO2, 31.3%) and ozone (29.4%), which are comparable to that of PM2.5 itself (31.1%) and about three times higher than that of nitrogen dioxide (8.2%). These interpretations suggest that the formulation of SO2-sensitive pollution in the ammonia-poor YRD region needs improvement. We consider our representation studies a step towards more comprehensive atmospheric hybrid models that take advantage of powerful artificial intelligence technologies while remaining physically explainable.

[1] Liao, Q., Zhu, M., Wu, L. et al. 2020. https://doi.org/10.1007/s40726-020-00159-z

[2] Zhu, M., Liao, Q., Wu, L. et al. 2023. https://doi.org/10.3390/rs15143466

[3] Zhu, M., Liao, Q., Wu, L. et al. 2024. In submission.

How to cite: Zhu, M., Wu, L., Su, H., and Wang, Z.: Representing the atmosphere using deep learning techniques: applications in radar echo data reconstruction and PM2.5 forecast error reduction, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-13977, https://doi.org/10.5194/egusphere-egu24-13977, 2024.