- German Climate Computing Center (DKRZ), Data Analysis, Germany (witte@dkrz.de)
High-resolution machine learning faces the challenge of balancing local computation with large physical context windows. Limited GPU memory, and the slow training that results when a model is distributed across multiple GPUs, further complicate this task.
We present a transformer model for high-resolution climate-related tasks that uses neural operators within a multi-grid architecture. This approach allows resolution independence, large physical context windows, and the handling of discontinuities such as coastlines.
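As an illustration of the multi-grid idea, the sketch below combines a neural-operator pass on a coarse, subsampled grid (supplying a large physical context window cheaply) with a pass on the fine grid (supplying local detail). This is a hypothetical, NumPy-based sketch, not the authors' implementation; the spectral convolution, the coarsening factor, and the random weights are all illustrative assumptions:

```python
import numpy as np

def spectral_conv(x, weights, modes):
    """Neural-operator-style layer: apply learnable weights to the
    lowest Fourier modes of a 1-D field, then transform back."""
    X = np.fft.rfft(x)
    out = np.zeros_like(X)
    out[:modes] = X[:modes] * weights[:modes]
    return np.fft.irfft(out, n=x.shape[0])

def multigrid_operator(fine, coarsen=4, modes=8, rng=None):
    """Fuse a local fine-grid pass with a coarse-grid pass that
    provides a large context window (illustrative only)."""
    rng = rng or np.random.default_rng(0)
    # Hypothetical random "learned" weights for the two grids.
    w_fine = rng.standard_normal(modes) + 1j * rng.standard_normal(modes)
    w_coarse = rng.standard_normal(modes) + 1j * rng.standard_normal(modes)
    coarse = fine[::coarsen]                       # restrict: wide context, low cost
    ctx = spectral_conv(coarse, w_coarse, modes)   # global, low-frequency features
    ctx_up = np.repeat(ctx, coarsen)[: fine.shape[0]]  # prolong back to fine grid
    local = spectral_conv(fine, w_fine, modes)     # local, high-resolution features
    return local + ctx_up
```

Because the spectral layer acts on a fixed number of modes rather than on grid points, the same weights can be applied to inputs of different lengths, which is the mechanism behind the resolution independence claimed above.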
The model is spatially flexible, supporting both regional and global training schemes. It is also agnostic to the number of input variables, so training can scale to large variable sets.
We demonstrate the ability of the model to scale, both spatially and in terms of variables. The model forms the foundation of an approach to learn from the rich and diverse climate data available, enabling high-resolution downscaling, infilling, and predictions.
How to cite: Witte, M., Meuer, J., and Christopher, K.: A Spatial Multi-Grid Neural Operator-Transformer Model for High-Resolution Climate Modeling, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-19942, https://doi.org/10.5194/egusphere-egu25-19942, 2025.