EGU26-20173, updated on 14 Mar 2026
https://doi.org/10.5194/egusphere-egu26-20173
EGU General Assembly 2026
© Author(s) 2026. This work is distributed under
the Creative Commons Attribution 4.0 License.
Poster | Thursday, 07 May, 08:30–10:15 (CEST), Display time Thursday, 07 May, 08:30–12:30
 
Hall X5, X5.211
Exploring Adversarial Attacks in AI Weather Models for Generation of High-resolution Tropical Cyclones
Marco Froelich and Sebastian Engelke
  • Research Institute for Statistics and Information Science, University of Geneva, Geneva, Switzerland

There has been recent interest in exploiting the differentiability of AI weather models to compute model sensitivities to initial conditions directly. In machine learning, adversarial attacks leverage such sensitivities to influence the output of a prediction system by finding optimal initial-condition perturbations. In weather forecasting, this methodology can be viewed through two lenses: differentiable models are susceptible to malicious attacks aimed at distorting operational forecasts [1], while access to sensitivities is an opportunity to further our understanding of real events through the generation of synthetic forecasts. Adversarial examples, i.e. perturbed initial conditions obtained from adversarial attacks, have been used in [2] to create even more extreme forecasts of a heatwave, providing a storyline approach to understanding black swan heatwave events.
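As a minimal sketch of the generic mechanism described above (not the authors' code or an actual weather model), the following uses projected gradient ascent to find a bounded initial-condition perturbation that increases a chosen forecast output. A toy linear map stands in for a differentiable AI weather emulator, so its gradient is available in closed form; the names `forecast` and `attack` are illustrative assumptions.

```python
import numpy as np

def forecast(x0, W):
    # Toy stand-in for a differentiable weather model: one linear step.
    # In practice this would be a full autoregressive neural emulator.
    return W @ x0

def attack(x0, W, target_idx, eps=0.1, lr=0.02, steps=50):
    """Projected gradient ascent: find a perturbation delta with
    ||delta||_inf <= eps that maximizes forecast(x0 + delta)[target_idx]."""
    delta = np.zeros_like(x0)
    for _ in range(steps):
        # For the linear toy model, the gradient of the target output
        # w.r.t. the initial condition is simply the target row of W;
        # a real model would supply this via automatic differentiation.
        grad = W[target_idx]
        # Signed-gradient step, projected back onto the eps-ball.
        delta = np.clip(delta + lr * np.sign(grad), -eps, eps)
    return x0 + delta

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
x0 = rng.normal(size=4)
x_adv = attack(x0, W, target_idx=0)
base = forecast(x0, W)[0]
adv = forecast(x_adv, W)[0]
```

The same loop applies unchanged to a neural forecast model whenever its gradients are exposed, which is precisely the property that makes differentiable weather models attackable.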

We further this effort by exploring adversarial attacks on tropical cyclone predictions at 0.25° resolution using Operational GraphCast. Although AI weather models are known to improve tropical cyclone track predictions over numerical systems, forecasting high intensities remains challenging, particularly at high resolution. Indeed, AI weather models trained with MSE-type losses on reanalysis are known to suffer from 'blurred' forecasts due to the implicit down-weighting of small-scale features. We find that while standard adversarial attacks on tropical cyclone forecasts are effective in controlling tropical cyclone tracks, they fail to reproduce realistic gradients of the temperature, geopotential, and wind fields, effectively worsening the blurring. This also holds for attacks on the AMSE-finetuned Operational GraphCast model [3], which otherwise shows significant improvements in representing small-scale features. We then borrow insights from the machine learning literature on the low-frequency bias of neural networks and its relationship to adversarial examples to mitigate this limitation and explore the capabilities of AI weather models in global high-resolution tropical cyclone forecasting.


References: 
[1] Imgrund, E., Eisenhofer, T., Rieck, K., 2025. Adversarial Observations in Weather Forecasting.
[2] Whittaker, T., Luca, A.D., 2025. Constructing Extreme Heatwave Storylines with Differentiable Climate Models.
[3] Subich, C., Husain, S.Z., Separovic, L., Yang, J., 2025. Fixing the Double Penalty in Data-Driven Weather Forecasting Through a Modified Spherical Harmonic Loss Function.

How to cite: Froelich, M. and Engelke, S.: Exploring Adversarial Attacks in AI Weather Models for Generation of High-resolution Tropical Cyclones, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-20173, https://doi.org/10.5194/egusphere-egu26-20173, 2026.