Climate Prediction in Reduced Dimensions: A Comparative Analysis of Linear Inverse Modeling, Reservoir Computing and Attention-based Transformers
- 1 Los Alamos National Laboratory, Los Alamos, United States of America (balu@lanl.gov)
- 2 University of California, Los Angeles
This study focuses on the application of machine learning techniques to better characterize the predictability of the spatiotemporal variability of sea surface temperature (SST) at the basin scale. Both sub-seasonal variability, including extreme events (e.g., marine heatwaves), and interannual variability are considered.
We rely on dimensionality reduction techniques, namely linear principal component analysis (PCA) as well as nonlinear autoencoders and their variants, and then perform the prediction tasks in the corresponding latent space using methodologies ranging from linear inverse modeling (LIM) to reservoir computing (RC) and attention-based transformers.
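As a rough illustration of this latent-space pipeline (a sketch, not the implementation used in the study), the example below reduces gridded SST anomalies with PCA and fits a linear inverse model from lag-covariance statistics; the array shapes, the 20-mode truncation, and the one-step lag are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

# Illustrative shapes: monthly SST anomaly maps flattened to (time, space).
# The data, truncation level, and lag below are placeholders, not the study's choices.
rng = np.random.default_rng(0)
sst_anom = rng.standard_normal((600, 5000))    # stand-in for real SST anomalies

# --- Dimensionality reduction: retain the leading 20 principal components ---
pca = PCA(n_components=20)
z = pca.fit_transform(sst_anom)                # latent trajectory, shape (600, 20)

# --- Linear inverse model (LIM) in the latent space ---
# Propagator at lag tau: G(tau) = C(tau) @ C(0)^{-1},
# where C(tau) is the lag-tau covariance of the latent state.
tau = 1                                        # one time step (e.g., one month)
z0, z1 = z[:-tau], z[tau:]
C0 = z0.T @ z0 / len(z0)                       # contemporaneous covariance
Ctau = z1.T @ z0 / len(z0)                     # lag-tau cross-covariance
G = Ctau @ np.linalg.inv(C0)                   # least-squares propagator

# Forecast: advance the latest latent state one step and map back to SST space.
z_forecast = z[-1] @ G.T
sst_forecast = pca.inverse_transform(z_forecast[None, :])
```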
After comparing predictive performance, we examine several issues, including the role of generalized synchronization in RC and the implicit memory of RC versus the explicit long-term memory of transformers, with the broad aim of shedding light on the effectiveness of these techniques for data-driven climate prediction.
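To make the notion of implicit memory concrete, the following minimal echo-state-network sketch shows the standard RC setup: the reservoir state carries a fading memory of the input history through its recurrence, and only a linear readout is trained (in contrast to a transformer, which attends explicitly to past tokens). All sizes and hyperparameters here are illustrative assumptions, not those of the study.

```python
import numpy as np

# Minimal echo-state network (reservoir computer) operating on a latent trajectory.
rng = np.random.default_rng(1)
n_latent, n_res = 20, 500
z = rng.standard_normal((600, n_latent))         # stand-in latent SST trajectory

# Fixed random input and reservoir weights; only the readout W_out is trained.
W_in = 0.1 * rng.standard_normal((n_res, n_latent))
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1 (echo-state property)

# Drive the reservoir; its state implicitly accumulates the input history.
r = np.zeros(n_res)
states = []
for t in range(len(z) - 1):
    r = np.tanh(W @ r + W_in @ z[t])
    states.append(r)
R = np.array(states)                             # reservoir states, shape (599, n_res)

# Ridge-regression readout mapping reservoir states to the next latent state.
targets = z[1:]
lam = 1e-4
W_out = np.linalg.solve(R.T @ R + lam * np.eye(n_res), R.T @ targets).T

# One-step prediction from the final reservoir state.
z_next_pred = W_out @ r
```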
How to cite: Nadiga, B. and Srinivasan, K.: Climate Prediction in Reduced Dimensions: A Comparative Analysis of Linear Inverse Modeling, Reservoir Computing and Attention-based Transformers, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-12141, https://doi.org/10.5194/egusphere-egu24-12141, 2024.