EGU26-17369, updated on 14 Mar 2026
https://doi.org/10.5194/egusphere-egu26-17369
EGU General Assembly 2026
© Author(s) 2026. This work is distributed under
the Creative Commons Attribution 4.0 License.
Oral | Friday, 08 May, 14:06–14:09 (CEST)
 
vPoster spot 3
Poster | Friday, 08 May, 16:15–18:00 (CEST), Display time Friday, 08 May, 14:00–18:00
 
vPoster Discussion, vP.52
Deep Reinforcement Learning for Operational Coastal Emergency Response With AI Agent Orchestration and Human Oversight
Marcello Sano2,1,3, Davide Ferrario1, Samuele Casagrande1, Sebastiano Vascon2,4, Silvia Torresan1,2, and Andrea Critto2,1
  • 1CMCC Foundation - Euro-Mediterranean Center on Climate Change, Venice, Italy
  • 2Department of Environmental Sciences, Informatics and Statistics, Ca' Foscari University of Venice, Venice, Italy
  • 3Griffith University, Australia
  • 4European Center for Living Technology, Venice, Italy

Despite the urgent need for adaptive coastal risk management, operational systems still rely heavily on static triggers and fragmented information, overlooking interactions between evolving hazards and response actions. Building on a completed game-like deep reinforcement learning (DRL) testbed, we present a pathway toward operational coastal decision support, progressing toward real-world case studies such as Venice in Italy and South East Queensland in Australia.

In the first phase, we developed a controllable game-like scenario that captures the essential components of coastal emergency management: a simplified representation of coastal geography and built assets, dynamic multi-hazard drivers evolving over time, and an action space reflecting plausible operational interventions under constraints. Using this environment, we demonstrated that a PPO-based DRL agent can learn adaptive policies through repeated interactions, and gained practical lessons on state representation, constraint handling, and reward design for safety-critical objectives.
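To make the structure of such a testbed concrete, the sketch below outlines a minimal gym-style coastal emergency environment trained with a PPO agent (here via stable-baselines3). The state variables (surge, exposure, resources), action effects, and reward terms are simplified placeholders introduced only for illustration, not the environment used in the study.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class CoastalEmergencyEnv(gym.Env):
    """Toy game-like coastal emergency environment (illustrative only).

    State: [surge level, asset exposure, resources remaining], all in [0, 1].
    Actions: 0 = do nothing, 1 = deploy barriers, 2 = evacuate zone.
    """

    def __init__(self, horizon=48):
        super().__init__()
        self.horizon = horizon
        self.observation_space = spaces.Box(0.0, 1.0, shape=(3,), dtype=np.float32)
        self.action_space = spaces.Discrete(3)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        self.state = np.array([0.1, 1.0, 1.0], dtype=np.float32)
        return self.state, {}

    def step(self, action):
        surge, exposure, resources = self.state
        # Hazard driver evolves stochastically over the episode.
        surge = float(np.clip(surge + self.np_random.normal(0.02, 0.05), 0.0, 1.0))
        cost = 0.0
        if action == 1 and resources > 0.1:    # barriers: modest exposure cut, low cost
            exposure = max(exposure - 0.2, 0.0)
            resources -= 0.1
            cost = 0.05
        elif action == 2 and resources > 0.2:  # evacuation: strong exposure cut, higher cost
            exposure = max(exposure - 0.5, 0.0)
            resources -= 0.2
            cost = 0.15
        damage = surge * exposure              # impact when hazard meets exposed assets
        reward = -(damage + cost)              # safety-critical objective: minimise damage plus cost
        self.state = np.array([surge, exposure, resources], dtype=np.float32)
        self.t += 1
        terminated = bool(self.t >= self.horizon)
        return self.state, float(reward), terminated, False, {}


if __name__ == "__main__":
    from stable_baselines3 import PPO

    env = CoastalEmergencyEnv()
    model = PPO("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=10_000)        # short run, for illustration only
```

Operational constraints (e.g. limited resources) appear here as simple guards on the action effects; in practice they can also be handled through action masking or penalty terms in the reward.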

We then address the transition from simulation to real-world settings by outlining a set of alternative state-representation options, spanning classical dimensionality reduction and feature engineering through to learned latent-state methods. We report results for selected approaches, using autoencoders as the primary entry point to compress high-dimensional spatio-temporal hazard and exposure information into compact variables that retain decision-relevant structure while improving training efficiency and robustness. This provides a practical interface to real-world, digital-twin-style environments built from geospatial and socio-economic data and forecast inputs.
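As a minimal illustration of this autoencoder entry point (grid size, latent width, and training loop are assumptions, not the configuration used in the study), the sketch below compresses a flattened hazard/exposure field into a low-dimensional latent vector that could serve as a compact DRL observation.

```python
import torch
import torch.nn as nn


class HazardAutoencoder(nn.Module):
    """Minimal dense autoencoder: flattened hazard/exposure grid -> compact latent state."""

    def __init__(self, n_cells=64 * 64, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_cells, 512), nn.ReLU(),
            nn.Linear(512, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, n_cells),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z


if __name__ == "__main__":
    model = HazardAutoencoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    # Stand-in for historical hazard/exposure snapshots (batch of flattened grids).
    fields = torch.rand(256, 64 * 64)
    for epoch in range(5):                     # brief run, for illustration only
        recon, _ = model(fields)
        loss = loss_fn(recon, fields)
        opt.zero_grad()
        loss.backward()
        opt.step()
    # At deployment, the encoder alone maps a new field to the compact DRL state.
    latent_state = model.encoder(fields[:1])
    print(latent_state.shape)                  # torch.Size([1, 16])
```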

Finally, we propose an orchestration layer to reduce the risks of AI-driven decision making and improve usability. A large language model (LLM) ingests DRL outputs and contextualises recommendations via retrieval-augmented generation over plans, studies, and standard operating procedures, together with API calls to dynamic data feeds. This layer is intended to translate DRL outputs into human-readable and auditable decision support for a human-in-the-loop operator, grounding recommendations in retrieved local documentation and live data feeds to strengthen transparency, uncertainty communication, and operational trust.
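The sketch below illustrates the intended flow of such an orchestration layer. The document snippets, the TF-IDF retrieval stand-in, the live-feed values, and the call_llm placeholder are all hypothetical; any vector store, forecast API, and chat-completion client could take their place.

```python
from dataclasses import dataclass
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


@dataclass
class Recommendation:
    """DRL output to be contextualised for the human operator."""
    action: str
    confidence: float


# Hypothetical local documentation corpus (plans, studies, SOPs).
DOCUMENTS = [
    "Standard operating procedure: activate mobile barriers when forecast surge exceeds 110 cm.",
    "Contingency plan: evacuation of low-lying districts requires a 6-hour lead time.",
    "Study: combined surge and rainfall events dominate residual flood risk in the lagoon.",
]


def retrieve(query: str, k: int = 2) -> list[str]:
    """Toy retrieval step standing in for a RAG vector store (TF-IDF similarity ranking)."""
    vec = TfidfVectorizer().fit(DOCUMENTS + [query])
    sims = cosine_similarity(vec.transform([query]), vec.transform(DOCUMENTS))[0]
    return [DOCUMENTS[i] for i in sims.argsort()[::-1][:k]]


def fetch_live_feed() -> dict:
    """Placeholder for API calls to dynamic data feeds (tide gauges, forecasts)."""
    return {"forecast_surge_cm": 118, "lead_time_h": 8}


def call_llm(prompt: str) -> str:
    """Placeholder for the LLM call; swap in any chat-completion client."""
    return f"[LLM briefing drafted from a prompt of {len(prompt)} characters]"


def orchestrate(rec: Recommendation) -> str:
    """Assemble a grounded, auditable briefing; the operator approves or rejects it."""
    context = retrieve(rec.action)
    feed = fetch_live_feed()
    prompt = (
        f"DRL recommendation: {rec.action} (confidence {rec.confidence:.2f}).\n"
        f"Live data: {feed}.\n"
        "Relevant documentation:\n- " + "\n- ".join(context) + "\n"
        "Explain the recommendation, cite the documents, and flag uncertainties."
    )
    briefing = call_llm(prompt)
    approved = input(f"{briefing}\nApprove recommendation? [y/n] ")  # human-in-the-loop gate
    return briefing if approved.strip().lower() == "y" else "Recommendation returned for review."


if __name__ == "__main__":
    print(orchestrate(Recommendation(action="deploy mobile barriers", confidence=0.82)))
```

The explicit approval step is the key design choice: the LLM only packages and grounds the DRL recommendation, while the final decision remains with the human operator.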

How to cite: Sano, M., Ferrario, D., Casagrande, S., Vascon, S., Torresan, S., and Critto, A.: Deep Reinforcement Learning for Operational Coastal Emergency Response With AI Agent Orchestration and Human Oversight, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-17369, https://doi.org/10.5194/egusphere-egu26-17369, 2026.