EGU26-7643, updated on 14 Mar 2026
https://doi.org/10.5194/egusphere-egu26-7643
EGU General Assembly 2026
© Author(s) 2026. This work is distributed under
the Creative Commons Attribution 4.0 License.
Poster | Wednesday, 06 May, 14:00–15:45 (CEST), Display time Wednesday, 06 May, 14:00–18:00
 
Hall X5, X5.213
Estimating Most Probable AMOC Collapse and Recovery Pathways Using Deep Reinforcement Learning
Francesco Guardamagna1 and Henk Dijkstra2
  • 1Utrecht University, IMAU, Utrecht, Netherlands (f.guardamagna@uu.nl)
  • 2Utrecht University, IMAU, Utrecht, Netherlands (h.a.Dijkstra@uu.nl)

Growing evidence suggests that the present-day Atlantic Meridional Overturning Circulation (AMOC) operates in a bistable regime and may transition to a weakened or collapsed ("OFF") state under climate-change forcing, with severe global climate impacts. Beyond deterministic forcing, stochastic variability can trigger noise-induced transitions between the stable AMOC states. Quantifying the probability and pathways of such transitions is therefore critical.

Previous work (Soons et al., 2024) applied Large Deviation Theory (LDT) to a stochastic box ocean model (Wood et al., 2019) to estimate the most probable pathways for noise-induced AMOC collapse and recovery. While effective, this approach requires explicit knowledge of system properties, such as the Jacobian, which limits its use in higher-dimensional, more complex climate models.
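To illustrate the LDT idea in the simplest possible setting (not the authors' box model), the most probable transition path can be found by minimizing the discretized Freidlin–Wentzell action of a toy one-dimensional double-well system. The drift f, the grid sizes, and the gradient-descent parameters below are illustrative assumptions; note that the gradient uses the analytic derivative f'(x), the kind of explicit system information that LDT-based methods require.

```python
import numpy as np

def f(x):
    # drift of a toy double-well system dx = f(x) dt + noise,
    # with two stable states at x = -1 and x = +1 (illustrative only)
    return x - x**3

def action(path, dt):
    # discrete Freidlin-Wentzell action: S = (1/2) * sum (dx/dt - f(x))^2 dt,
    # evaluated at segment midpoints
    xm = 0.5 * (path[1:] + path[:-1])
    return 0.5 * np.sum((np.diff(path) / dt - f(xm)) ** 2) * dt

def most_probable_path(n=40, dt=0.25, iters=2000, lr=0.02):
    # gradient descent on the action; the gradient uses the explicit
    # derivative f'(x) = 1 - 3x^2 -- information that equation-free
    # approaches deliberately avoid
    path = np.linspace(-1.0, 1.0, n + 1)   # straight-line initial guess
    for _ in range(iters):
        xm = 0.5 * (path[1:] + path[:-1])
        r = np.diff(path) / dt - f(xm)     # residual per segment
        fp = 1.0 - 3.0 * xm**2             # analytic f'(x) at midpoints
        g = np.zeros_like(path)
        g[1:] += dt * r * (1.0 / dt - 0.5 * fp)
        g[:-1] += dt * r * (-1.0 / dt - 0.5 * fp)
        g[0] = g[-1] = 0.0                 # endpoints stay pinned
        path -= lr * g
    return path

path = most_probable_path()
```

The resulting path connects the two stable states while lowering the action relative to the straight-line guess; the minimizing path is the one along which noise-induced transitions are exponentially most likely.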

Here, we adapt a recently proposed deep reinforcement learning framework (Lin et al., 2025) to compute most probable transition pathways in stochastic dynamical systems without prior knowledge of the governing equations. Applied to the stochastic box ocean model, the method robustly identifies physically consistent collapse and recovery pathways, comparable to those obtained using LDT. Finally, we demonstrate the feasibility of this framework in a more complex ocean model.
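The details of the deep reinforcement learning framework (Lin et al., 2025) are not given in the abstract; as a heavily simplified stand-in, the equation-free flavor of the approach can be sketched with a derivative-free optimizer (here a cross-entropy method, not the authors' method) that minimizes the same path cost using only forward evaluations of the dynamics, with no Jacobian or adjoint information. All parameter choices and the sine-mode parametrization below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # toy double-well drift; stable states at x = -1 and x = +1
    return x - x**3

def path_cost(path, dt):
    # Freidlin-Wentzell-type path cost, evaluated with forward calls to f
    # only: no derivative of the model is required
    xm = 0.5 * (path[1:] + path[:-1])
    return 0.5 * np.sum((np.diff(path) / dt - f(xm)) ** 2) * dt

def equation_free_path(n=40, dt=0.25, modes=6, pop=100, elite=10, gens=100):
    t = np.linspace(0.0, 1.0, n + 1)
    base = np.linspace(-1.0, 1.0, n + 1)             # straight-line guess
    # sine basis vanishes at both ends, so endpoints stay pinned
    basis = np.array([np.sin((j + 1) * np.pi * t) for j in range(modes)])
    mu, sigma = np.zeros(modes), 0.2
    for _ in range(gens):
        coeffs = mu + sigma * rng.normal(size=(pop, modes))
        costs = np.array([path_cost(base + c @ basis, dt) for c in coeffs])
        elites = coeffs[np.argsort(costs)[:elite]]   # lowest-cost candidates
        mu = elites.mean(axis=0)                     # cross-entropy update
        sigma = 0.9 * sigma + 0.1 * elites.std(axis=0).mean()
    return base + mu @ basis

path = equation_free_path()
```

Because the optimizer queries the system only through forward evaluations, the same recipe carries over to models whose governing equations (and hence Jacobians) are unavailable, which is the motivation for the learning-based framework adopted here.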

How to cite: Guardamagna, F. and Dijkstra, H.: Estimating Most Probable AMOC Collapse and Recovery Pathways Using Deep Reinforcement Learning, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-7643, https://doi.org/10.5194/egusphere-egu26-7643, 2026.