EGU23-2909, updated on 22 Feb 2023
https://doi.org/10.5194/egusphere-egu23-2909
EGU General Assembly 2023
© Author(s) 2023. This work is distributed under
the Creative Commons Attribution 4.0 License.

Long-Term Forecasting of Environment variables of MERRA2 based on Transformers

Tsengdar Lee¹, Sujit Roy²,³, Ankur Kumar²,³, Rahul Ramachandran², and Udaysankar Nair³
  • ¹NASA Headquarters, Washington, DC, United States of America (tsengdar.j.lee@nasa.gov)
  • ²NASA Marshall Space Flight Center, Huntsville, AL, United States of America
  • ³University of Alabama in Huntsville, Huntsville, AL, United States of America

Transformers have shown great promise in sequence modeling. The Vision Transformer (ViT) recently proposed by Dosovitskiy et al. achieves strong performance in image recognition [1]. Guibas et al. proposed a transformer that keeps the ViT as its backbone but replaces self-attention with a Fourier-neural-operator-based token mixer, and this architecture has been used to predict wind and precipitation from the ERA5 dataset [2,3]. Following that work, we trained FourCastNet from scratch on the MERRA-2 dataset with 3 vertical levels (z450, z500, z550) and 11 variables (adding u, v, and temperature).

We trained on data from 2005 to 2015 and made predictions from initial conditions taken from 2017, forecasting up to 7 days ahead. For the first 24 hours of prediction, the mean correlation was 0.998; the root mean squared error (RMSE) was 8.779 at 6 hours and 19.581 at 24 hours, over a value range of -575.6 to 330.6. The model, trained on the same data and the same 11 variables, was also tested on its ability to predict major events such as hurricanes: given initial conditions for the category 5 hurricane of Sep 28 – Oct 10, 2016, it was able to predict the hurricane for 18 hours. Future work will tune the model and add more environmental variables from MERRA-2 to make the predictions more robust over longer lead times.
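
The core of FourCastNet is the adaptive Fourier neural operator (AFNO) token mixer of Guibas et al. [2], which mixes ViT tokens in the Fourier domain instead of through self-attention. The PyTorch sketch below illustrates that idea only: it simplifies the full AFNO (which applies a block-diagonal MLP and soft-thresholding per Fourier mode) to a single learned complex weight per channel, and it is not the code used in this work.

import torch
import torch.nn as nn

class SimpleAFNOMixer(nn.Module):
    """Illustrative Fourier-domain token mixer (heavily simplified AFNO)."""

    def __init__(self, channels: int):
        super().__init__()
        # One learned complex weight per channel, shared across all
        # Fourier modes; the real AFNO learns a per-mode MLP instead.
        self.w = nn.Parameter(0.02 * torch.randn(channels, dtype=torch.cfloat))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width) grid of tokens
        xf = torch.fft.rfft2(x, norm="ortho")         # spatial -> frequency
        xf = xf * self.w[None, :, None, None]         # mix modes channel-wise
        return torch.fft.irfft2(xf, s=x.shape[-2:], norm="ortho")

# e.g. 11 MERRA-2 variables on a (hypothetical) 64x64 token grid
x = torch.randn(2, 11, 64, 64)
print(SimpleAFNOMixer(11)(x).shape)   # torch.Size([2, 11, 64, 64])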

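The 7-day forecasts are produced autoregressively, feeding each prediction back in as the next initial condition, and are scored against MERRA-2 with correlation and RMSE. The sketch below shows one plausible reading of that loop; the 6-hourly step (28 steps for 7 days) and the exact metric definitions are assumptions consistent with the 6- and 24-hour scores quoted above, not the authors' evaluation code.

import torch

def rollout(model, x0: torch.Tensor, steps: int = 28) -> list:
    """Autoregressive forecast: each output becomes the next input.
    With an assumed 6-hour step, 28 steps span the 7-day horizon."""
    states, x = [], x0
    with torch.no_grad():
        for _ in range(steps):
            x = model(x)
            states.append(x)
    return states

def rmse(pred: torch.Tensor, truth: torch.Tensor) -> float:
    """Root mean squared error over all grid points and variables."""
    return torch.sqrt(torch.mean((pred - truth) ** 2)).item()

def correlation(pred: torch.Tensor, truth: torch.Tensor) -> float:
    """Pearson correlation between flattened prediction and truth."""
    p = pred.flatten() - pred.mean()
    t = truth.flatten() - truth.mean()
    return (torch.dot(p, t) / (p.norm() * t.norm())).item()
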
References:
1. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S. and Uszkoreit, J., 2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929.
2. Guibas, J., Mardani, M., Li, Z., Tao, A., Anandkumar, A. and Catanzaro, B., 2021. Efficient token mixing for transformers via adaptive Fourier neural operators. In International Conference on Learning Representations.
3. Pathak, J., Subramanian, S., Harrington, P., Raja, S., Chattopadhyay, A., Mardani, M., Kurth, T., Hall, D., Li, Z., Azizzadenesheli, K. and Hassanzadeh, P., 2022. FourCastNet: A global data-driven high-resolution weather model using adaptive Fourier neural operators. arXiv preprint arXiv:2202.11214.

How to cite: Lee, T., Roy, S., Kumar, A., Ramachandran, R., and Nair, U.: Long-Term Forecasting of Environment variables of MERRA2 based on Transformers, EGU General Assembly 2023, Vienna, Austria, 24–28 Apr 2023, EGU23-2909, https://doi.org/10.5194/egusphere-egu23-2909, 2023.