EGU26-11475, updated on 14 Mar 2026
https://doi.org/10.5194/egusphere-egu26-11475
EGU General Assembly 2026
© Author(s) 2026. This work is distributed under
the Creative Commons Attribution 4.0 License.
Oral | Tuesday, 05 May, 17:20–17:30 (CEST) | Room -2.62
Comparison of Online Training Methods for Data-Driven Subgrid-Scale Parameterizations in Non-Differentiable Models
Maha Badri (1,2), Brian Groenke (2), Maximilian Gelbrecht (1,2), and Niklas Boers (1,2,3)
  • (1) Munich Climate Center and Earth System Modelling Group, Technical University of Munich, Munich, Germany
  • (2) Potsdam Institute for Climate Impact Research, Potsdam, Germany
  • (3) University of Exeter, Exeter, UK

Subgrid-scale (SGS) parameterizations remain a leading source of uncertainty in weather and climate models, where they represent, on the resolved fields, the effects of unresolved processes occurring at scales smaller than the model's grid resolution. Similar closure problems arise in computational fluid dynamics (CFD), where turbulence models are needed to represent the impact of unresolved scales on the resolved flow. Despite recent progress toward differentiable, hybrid climate models enabled by automatic differentiation, most operational Earth system models (ESMs) remain effectively non-differentiable, which limits systematic online calibration and training.

While data-driven closures trained offline can perform well a priori, their performance often deteriorates a posteriori, i.e., once coupled to the solver, because the coupled setting introduces feedbacks that are absent during offline training. In this study, we treat a controlled CFD turbulence setting as a benchmark for climate-relevant SGS learning and compare two classes of online training strategies for data-driven closures in non-differentiable models: (i) gradient-free ensemble Kalman inversion (EKI), which leverages the robustness and parallelism of ensemble-based inverse methods, and (ii) gradient-based optimization enabled by a learned differentiable emulator. For the emulator, we train a fast neural ODE surrogate of the forward-model dynamics that preserves its structure and is differentiable by construction, enabling gradient-based training without modifying the original solver. We then evaluate both approaches in terms of accuracy, computational cost, and scalability.
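
For concreteness, the sketch below shows a single EKI update for a closure parameter vector, written in JAX. It is a minimal illustration under assumptions the abstract does not specify: the closure parameters are flattened into a vector theta, the forward model is summarized by output statistics G(theta) obtained from a coupled solver run, and the function name eki_step and all variable names are hypothetical. The key property matching the gradient-free setting is that only forward evaluations are needed, so each ensemble member can run the non-differentiable solver independently and in parallel.

    import jax
    import jax.numpy as jnp

    def eki_step(key, thetas, g_vals, y, gamma):
        # One ensemble Kalman inversion update for closure parameters.
        # thetas : (J, p) ensemble of parameter vectors
        # g_vals : (J, d) forward-model statistics G(theta_j), one coupled
        #          (non-differentiable) solver run per ensemble member
        # y      : (d,)  reference statistics
        # gamma  : (d, d) observation-noise covariance
        J = thetas.shape[0]
        dth = thetas - thetas.mean(axis=0)        # parameter anomalies (J, p)
        dg = g_vals - g_vals.mean(axis=0)         # output anomalies (J, d)
        C_tg = dth.T @ dg / J                     # cross-covariance (p, d)
        C_gg = dg.T @ dg / J                      # output covariance (d, d)
        # Perturbed observations, one independent draw per ensemble member
        noise = jax.random.multivariate_normal(
            key, jnp.zeros_like(y), gamma, shape=(J,))
        innovations = (y + noise) - g_vals        # (J, d)
        increments = jnp.linalg.solve(C_gg + gamma, innovations.T).T
        return thetas + increments @ C_tg.T       # Kalman-style update (J, p)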
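The emulator route can be sketched in the same spirit. Below, a small neural network stands in for the coarse solver's tendency, the closure term is added to its right-hand side, and an a posteriori rollout loss is differentiated with respect to the closure parameters only. Everything here is an illustrative assumption rather than the study's implementation: the two-layer MLPs, the flat state vector, the fixed-step RK4 integrator, and the names mlp, coupled_rhs, rollout, and online_loss are hypothetical.

    import jax
    import jax.numpy as jnp

    def mlp(params, u):
        # Small two-layer network; `params` is a (w1, b1, w2, b2) tuple.
        w1, b1, w2, b2 = params
        h = jnp.tanh(u @ w1 + b1)
        return h @ w2 + b2

    def coupled_rhs(closure_params, emulator_params, u):
        # Resolved tendency from the learned surrogate plus the neural closure.
        return mlp(emulator_params, u) + mlp(closure_params, u)

    def rollout(closure_params, emulator_params, u0, dt, n_steps):
        # Fixed-step RK4 integration of the hybrid neural ODE; the whole
        # trajectory is differentiable with respect to both parameter sets.
        def step(u, _):
            f = lambda v: coupled_rhs(closure_params, emulator_params, v)
            k1 = f(u)
            k2 = f(u + 0.5 * dt * k1)
            k3 = f(u + 0.5 * dt * k2)
            k4 = f(u + dt * k3)
            u_next = u + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
            return u_next, u_next
        _, traj = jax.lax.scan(step, u0, None, length=n_steps)
        return traj

    def online_loss(closure_params, emulator_params, u0, u_ref, dt):
        # A posteriori loss: mismatch of the coupled rollout against reference.
        traj = rollout(closure_params, emulator_params, u0, dt, u_ref.shape[0])
        return jnp.mean((traj - u_ref) ** 2)

    # Gradient with respect to the closure parameters only; the frozen
    # surrogate stands in for the non-differentiable original solver.
    closure_grad = jax.grad(online_loss, argnums=0)

Because the rollout is differentiable end to end, jax.grad propagates the a posteriori loss back through the surrogate into the closure, which is precisely what the non-differentiable original solver would prevent.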

How to cite: Badri, M., Groenke, B., Gelbrecht, M., and Boers, N.: Comparison of Online Training Methods for Data-Driven Subgrid-Scale Parameterizations in Non-Differentiable Models, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-11475, https://doi.org/10.5194/egusphere-egu26-11475, 2026.