- Duke University, Civil and Environmental Engineering, Durham, United States of America
A pressing problem exacerbated by climate change is the inability to prepare for extreme climate and weather events due to the limited historical record of observed extremes. Although crucial for risk assessment and informed policy-making, the distribution of "feasible" outcomes remains largely uncertain, with predictions spanning variously defined confidence levels that remain sensitive to the choice of metrics and physical assumptions. This question naturally lends itself to investigating how we can generate plausible realizations of extreme events, and thereby enable mitigation efforts, before communities are forced to confront destructive realities. We present a time-conditioned generative framework based on a computer-vision-aided diffusion model trained on 1 km $\times$ 1 km precipitation fields and their trajectories over time. The model outputs n plausible realizations of future storm events that may unfold over the San Jacinto river basin on the Gulf Coast of Texas.
Beyond unconditional sampling, we introduce control variables that make generation decision-relevant: the model is conditioned on a (duration, intensity) pair, enabling users to request ensembles spanning targeted severity regimes (e.g., short–extreme vs. long–moderate) while preserving realistic spatiotemporal structure. This yields a family of distributions over storm trajectories indexed by interpretable controls, allowing systematic stress testing of infrastructure and emergency-response plans under plausible but high-impact scenarios.
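Conditional generation of this kind can be illustrated with a minimal DDPM-style reverse-diffusion loop. This is a sketch, not the authors' implementation: the noise schedule, step count, and the placeholder `denoise` function are all assumptions standing in for the trained, conditioned denoiser.

```python
import numpy as np

def sample_conditional(denoise, shape, duration, intensity, n_steps=50, seed=0):
    """Draw one storm realization conditioned on a (duration, intensity) pair.

    `denoise` stands in for the trained diffusion model: it predicts the noise
    component of `x` at step `t` given the conditioning vector `cond`.
    """
    rng = np.random.default_rng(seed)
    cond = np.array([duration, intensity])
    x = rng.standard_normal(shape)            # start from pure Gaussian noise
    betas = np.linspace(1e-4, 0.02, n_steps)  # linear noise schedule (assumed)
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)
    for t in reversed(range(n_steps)):
        eps_hat = denoise(x, t, cond)         # model's noise prediction
        # DDPM posterior-mean update
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps_hat) / np.sqrt(alphas[t])
        if t > 0:                             # inject noise except at the final step
            x += np.sqrt(betas[t]) * rng.standard_normal(shape)
    return x

# Hypothetical placeholder denoiser (predicts zero noise) — an ensemble of
# three realizations for a 6 h, 50 mm/h conditioning pair:
fields = [sample_conditional(lambda x, t, c: np.zeros_like(x),
                             shape=(64, 64), duration=6.0, intensity=50.0, seed=i)
          for i in range(3)]
```

In practice each call with a different seed yields a distinct member of the requested-severity ensemble.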
We separate the evaluation of our approach into two complementary perspectives: (i) distribution matching for in-sample generations, and (ii) physics-based alignment with storm properties for out-of-sample generations. The spatiotemporal structure of storms is also benchmarked against strong baselines such as the analog ensemble method, quantifying how realistically our model captures intense rainfall. To extract evolving storm geometries, we employ a k-nearest-neighbors (kNN) computer-vision algorithm that dynamically identifies storm shapes across time steps. Owing to the probabilistic nature of diffusion models, more comprehensive envelopes of storm intensity and trajectory can be obtained for uncertainty quantification.
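The shape-extraction step can be sketched with a simplified stand-in: here, contiguous above-threshold pixels are grouped into discrete storm objects via a connected-component flood fill, substituting for the kNN-based tracker described above; the threshold value and units are assumptions.

```python
from collections import deque

import numpy as np

def storm_masks(field, thresh=1.0):
    """Label contiguous above-threshold regions of a precipitation field.

    Pixels exceeding `thresh` (mm/h, assumed) are merged with their
    4-connected neighbors into integer-labeled storm objects (0 = dry).
    """
    wet = field > thresh
    labels = np.zeros(field.shape, dtype=int)
    current = 0
    for i, j in zip(*np.nonzero(wet)):
        if labels[i, j]:
            continue                       # already assigned to a storm
        current += 1
        labels[i, j] = current
        queue = deque([(i, j)])
        while queue:                       # breadth-first flood fill
            a, b = queue.popleft()
            for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                x, y = a + da, b + db
                if (0 <= x < field.shape[0] and 0 <= y < field.shape[1]
                        and wet[x, y] and not labels[x, y]):
                    labels[x, y] = current
                    queue.append((x, y))
    return labels

# Toy 6x6 field with two separated rain cells:
f = np.zeros((6, 6))
f[1:3, 1:3] = 5.0
f[4:6, 4:6] = 3.0
lab = storm_masks(f)  # two distinct labels, one per cell
```

Applying the same labeling frame by frame and matching overlapping labels across frames yields the evolving storm geometries used for trajectory diagnostics.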
Finally, we introduce a metric that jointly measures physical plausibility, through features such as intensity–duration structure and scaling, and novelty relative to the raw training data. The metric penalizes patterns that merely reproduce the training data while rewarding those that respect feasible dynamics, giving a principled way to compare generative models for extremes. We can therefore determine not only how realistic our generated storms are, but also how much physical diversity the generated storms contribute beyond the observed data. We present an open evaluation suite for controllable storm generation, including storm tracking, intensity–duration diagnostics, and physical-novelty scoring.
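A toy version of such a combined score can be written down directly. The envelope-based plausibility term, the nearest-neighbor novelty term, and the 50/50 weighting below are all illustrative assumptions, not the metric proposed in the abstract.

```python
import numpy as np

def physical_novelty_score(gen, train, w_novel=0.5):
    """Toy combined score over per-storm feature vectors, e.g. rows of
    (peak intensity, duration).

    Plausibility rewards generated features inside the training envelope;
    novelty rewards distance to the nearest training sample, so exact
    copies of the training data earn zero novelty.
    """
    lo, hi = train.min(axis=0), train.max(axis=0)
    plaus = ((gen >= lo) & (gen <= hi)).all(axis=1).mean()  # envelope check
    # distance from each generated storm to its nearest training storm
    d = np.linalg.norm(gen[:, None, :] - train[None, :, :], axis=-1).min(axis=1)
    novelty = np.tanh(d).mean()   # bounded so raw distance cannot dominate
    return (1 - w_novel) * plaus + w_novel * novelty

train = np.array([[1.0, 1.0], [2.0, 2.0]])
score = physical_novelty_score(train.copy(), train)  # exact copies of training data
# → 0.5 (fully plausible, zero novelty)
```

A model that memorizes the training set scores only on plausibility, while one that drifts outside the feasible envelope loses the plausibility term, capturing the intended trade-off.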
How to cite: Tsao, V., Zaniolo, M., and Veveakis, M.: Synthetic Physics-Aware Storm Generation via Diffusion Models for Risk Analysis of Catastrophic Events, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-5928, https://doi.org/10.5194/egusphere-egu26-5928, 2026.