EGU26-2346, updated on 13 Mar 2026
https://doi.org/10.5194/egusphere-egu26-2346
EGU General Assembly 2026
© Author(s) 2026. This work is distributed under
the Creative Commons Attribution 4.0 License.
Oral | Wednesday, 06 May, 08:35–08:45 (CEST)
 
Room 2.24
Efficient Gradient-Approximation Methods for Online Learning in Hybrid Neural–Physical Ocean Models
Emilio González Zamora1, Said Ouala1, and Pierre Tandeo1,2
  • 1IMT Atlantique, Lab-STICC, Brest, France
  • 2RIKEN Center for Computational Science, Kobe, Japan

Hybrid modeling integrates data-driven Machine Learning (ML) components, such as Neural Networks (NN), into physics-based numerical models to improve the accuracy, stability, and adaptability of dynamical simulations. Rather than replacing established physical laws, hybrid models augment them by learning corrections that compensate for unresolved processes, reduce systematic biases, or dynamically calibrate uncertain parameters.

In oceanic and atmospheric numerical models, unresolved dynamics are represented through sub-grid-scale (SGS) parameterizations coupled to the Navier–Stokes equations. As these parameterizations constitute a major source of uncertainty, recent work has increasingly explored Artificial Intelligence (AI) to better model and constrain them. A particularly promising strategy is online learning, in which the AI model is embedded within the numerical solver and trained while interacting with the evolving system dynamics. This setup allows the model to learn temporal dependencies across multiple solver steps and to optimize long-term behavior. Although online learning has demonstrated improved forecast skill and stability over long horizons compared to the more widely used offline learning strategy, its application to high-dimensional ocean models is limited by two key challenges: the requirement for fully differentiable solvers and the high computational and memory costs associated with backpropagation through long trajectories.
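To see where the cost comes from, note that the gradient of a trajectory loss with respect to the learned parameters accumulates products of per-step solver Jacobians, one factor per solver step. The toy sketch below (our own illustration, not the authors' code) makes this explicit for a scalar hybrid step x_{t+1} = a·x_t + θ, where a plays the role of the solver Jacobian and θ a learned additive correction, and checks the backpropagation-through-time gradient against finite differences:

```python
import numpy as np

def rollout(theta, a=0.9, x0=1.0, T=20):
    # Hybrid-style step: "physics" a*x plus a learned additive term theta.
    x = x0
    for _ in range(T):
        x = a * x + theta
    return x

def loss(theta, **kw):
    # Trajectory loss on the final state (a stand-in for a forecast error).
    return 0.5 * rollout(theta, **kw) ** 2

def exact_grad(theta, a=0.9, x0=1.0, T=20):
    # Backpropagation through time: dx_T/dtheta accumulates one per-step
    # Jacobian factor dx_{t+1}/dx_t = a for each remaining solver step.
    dx_dtheta = sum(a ** k for k in range(T))  # = 1 + a + ... + a^{T-1}
    x_T = rollout(theta, a=a, x0=x0, T=T)
    return x_T * dx_dtheta

theta = 0.1
g = exact_grad(theta)
eps = 1e-6
g_fd = (loss(theta + eps) - loss(theta - eps)) / (2 * eps)
print(g, g_fd)  # the chain-rule and finite-difference gradients agree
```

In a high-dimensional ocean model, each scalar factor a becomes a large Jacobian matrix that must be stored and multiplied for every step of a long trajectory, which is exactly the memory and compute bottleneck described above.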

To overcome these limitations, we introduce a new family of gradient-approximation methods that selectively simplify intermediate Jacobians in the backpropagation chain. The resulting gradients closely approximate the exact full gradients over long trajectories, preserving the dominant sensitivities required for effective online learning and substantially reducing computational and memory overhead.
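The abstract does not spell out which Jacobians are simplified or how. As one illustrative stand-in (our assumption, not the authors' method), the sketch below replaces each intermediate solver Jacobian in the backpropagation chain with the identity. This removes the need to store and multiply intermediate Jacobians while preserving the sign and rough magnitude of the gradient when the per-step dynamics are close to identity, e.g. for small solver time steps:

```python
def grads(theta, a=0.98, x0=1.0, T=50):
    # Forward rollout of the toy hybrid step x_{t+1} = a*x_t + theta.
    x = x0
    for _ in range(T):
        x = a * x + theta
    # Exact BPTT sensitivity: products of the per-step Jacobians a.
    exact = x * sum(a ** k for k in range(T))
    # Approximation: treat every intermediate Jacobian as the identity (1),
    # so no Jacobian products are stored or evaluated.
    approx = x * T
    return exact, approx

exact, approx = grads(0.05)
# With a close to 1, the approximate gradient keeps the sign and order of
# magnitude of the exact one, which is enough to drive a gradient descent.
print(exact, approx)
```

Whether such a simplification preserves the dominant sensitivities depends on the spectrum of the true Jacobians; the methods introduced here are designed precisely so that the approximate gradients stay close to the exact ones over long trajectories.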

We evaluate the proposed methods using two case studies of increasing complexity. We first consider a hybrid neural–Lorenz-63 model in which an AI component compensates for missing dynamics. The framework is then extended to a semi-realistic hybrid quasi-geostrophic model of the Northwestern Mediterranean Sea, demonstrating two complementary enhancement strategies: the calibration of a biased physical parameter (bottom drag) and an NN-based correction of bottom-layer momentum tendencies. Together, these experiments show that our Jacobian-approximation strategies enable stable and efficient online learning across both low-dimensional chaotic systems and high-dimensional ocean models. Although our configurations remain simpler than fully operational ocean models, our results provide a foundation for scaling online learning to realistic ocean applications and, ultimately, for integrating AI-based corrections into next-generation forecasting systems.
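As a concrete picture of the first case study (a sketch under our own assumptions about which dynamics are "missing"; the abstract does not specify the removed term), one can integrate Lorenz-63 with one nonlinear term deleted and an additive correction slot in its place, which an online-trained NN would fill:

```python
import numpy as np

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def truncated_rhs(state, correction):
    # Lorenz-63 with the -x*z term in dy/dt removed; `correction` is the
    # slot a hybrid model's NN (here a placeholder function) would fill.
    x, y, z = state
    dx = SIGMA * (y - x)
    dy = RHO * x - y + correction(state)  # full model: - x * z
    dz = x * y - BETA * z
    return np.array([dx, dy, dz])

def step(state, dt, correction):
    # Forward-Euler solver step; an online-learning setup would
    # backpropagate through many such steps to train the correction.
    return state + dt * truncated_rhs(state, correction)

# Plugging in the exact missing term recovers the true Lorenz-63 model.
true_term = lambda s: -s[0] * s[2]
state = np.array([1.0, 1.0, 1.0])
for _ in range(100):
    state = step(state, 1e-3, true_term)
print(state)
```

Replacing `true_term` with a small NN and training it through the solver steps is the essence of the first experiment; the quasi-geostrophic case applies the same pattern to a biased bottom-drag parameter and a bottom-layer momentum correction.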

How to cite: González Zamora, E., Ouala, S., and Tandeo, P.: Efficient Gradient-Approximation Methods for Online Learning in Hybrid Neural–Physical Ocean Models, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-2346, https://doi.org/10.5194/egusphere-egu26-2346, 2026.