As artificial intelligence (AI) systems transition from research prototypes to operational tools in Earth system science and forecasting, establishing confidence and trust in their predictions becomes increasingly critical. Although the inputs and outputs of AI models are observable, their internal decision-making processes are often highly complex and difficult for humans to interpret, leading to their frequent characterisation as potentially untrustworthy “black boxes”.
In this work, we examine a range of explainable artificial intelligence (XAI) techniques designed to provide insight into AI model predictions. Many of these methods have been developed primarily with classification tasks in mind, raising important questions about their suitability for the regression-based problems that dominate geoscientific applications. We investigate the application of XAI methods to a machine-learning emulator of the Lorenz ’63 system (an archetypal chaotic dynamical model) and review existing case studies that apply XAI in regression settings relevant to Earth sciences.
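For readers unfamiliar with the setup, the sketch below illustrates the kind of experiment described: a small neural network is fitted to one-step transitions of the Lorenz ’63 system and then probed with a model-agnostic XAI method suited to regression (here, scikit-learn’s permutation importance). This is not the authors’ code; the emulator architecture, step size, trajectory length, and choice of XAI method are all illustrative assumptions.

```python
# Minimal illustrative sketch (assumptions, not the authors' method):
# emulate one step of the Lorenz '63 system with an MLP, then apply
# permutation importance, a model-agnostic XAI method valid for regression.
import numpy as np
from scipy.integrate import solve_ivp
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.inspection import permutation_importance

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0  # classic Lorenz '63 parameters

def lorenz63(t, s):
    # dx/dt = sigma(y - x), dy/dt = x(rho - z) - y, dz/dt = xy - beta z
    x, y, z = s
    return [SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z]

# Integrate a trajectory and build (state_t -> state_{t+dt}) training pairs.
dt, n = 0.01, 20_000  # assumed step size and trajectory length
sol = solve_ivp(lorenz63, (0.0, n * dt), [1.0, 1.0, 1.0],
                t_eval=np.arange(0, n * dt, dt), rtol=1e-8, atol=1e-8)
states = sol.y.T
X, Y = states[:-1], states[1:]

# Fit a small neural-network emulator of the one-step map
# (architecture and hyperparameters are arbitrary assumptions).
emulator = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0),
).fit(X, Y)

# Permutation importance: how much does shuffling each input variable
# degrade the emulator's R^2 score, averaged over the three outputs?
result = permutation_importance(emulator, X, Y, n_repeats=10, random_state=0)
for name, imp in zip(["x", "y", "z"], result.importances_mean):
    print(f"importance of {name}: {imp:.3f}")
```

Permutation importance is chosen here only because it makes no classification-specific assumptions; gradient- or attribution-based methods could be substituted in the same framework, which is part of what the comparison in this work examines.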
We highlight key challenges and limitations of current, general-purpose XAI approaches when applied to chaotic, continuous, high-dimensional, and physically constrained systems. Finally, we identify gaps in existing methodologies and discuss future directions for developing XAI techniques better aligned with the context-specific needs of regression problems in geoscientific modelling and forecasting.
How to cite: Higgs, I., Hunt, K., Jones, T., and Ellis, A.-L.: Evaluating explainable AI Methods for geoscientific regression: insights from applications and a chaotic toy model, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-23208, https://doi.org/10.5194/egusphere-egu26-23208, 2026.