- University of Sheffield, Bioscience, United Kingdom of Great Britain – England, Scotland, Wales (thughes3@sheffield.ac.uk)
Dynamics within ecosystems vary in their transience and persistence. While some environmental drivers exert near-instantaneous control on productivity, others act hysteretically, with the influence of past conditions persisting beyond the moment of exposure. For instance, in gross primary productivity (GPP), incoming shortwave radiation predominantly regulates photosynthesis on a diurnal timescale, whereas temperature, vapour pressure deficit, and soil moisture may exert delayed or cumulative effects associated with acclimation, stress accumulation, and recovery. Accurately representing these temporal dependencies is essential for credible ecosystem modelling, yet remains challenging for both statistical and machine-learning approaches.
Sequential models, such as Long Short-Term Memory (LSTM) networks, offer a flexible means of learning temporal dependencies directly from observations, without requiring predefined assumptions about lag structure. However, their opaque internal representations raise concerns regarding scientific interpretability and trust, limiting their use beyond prediction. Explainable AI (XAI) methods allow these learned representations to be interrogated, enabling an assessment of how, when, and for how long different drivers influence model outputs.
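As a concrete illustration of the kind of sequential model described above, the sketch below shows a minimal LSTM regressor that maps multi-step driver sequences to a GPP estimate. It is written in PyTorch purely for illustration; the driver set, hidden size, and single-output head are assumptions, not the configuration used in this study.

```python
import torch
import torch.nn as nn

class GPPFromDrivers(nn.Module):
    """Minimal LSTM mapping a sequence of meteorological/vegetation drivers to a GPP estimate (illustrative only)."""
    def __init__(self, n_drivers: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_drivers, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):               # x: (batch, timesteps, n_drivers)
        out, _ = self.lstm(x)           # hidden state at every timestep
        return self.head(out[:, -1])    # predict GPP at the final timestep of the window
```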
Here, we investigate the use of Integrated Gradients (IG) in characterising the temporal structure of driver influence learned by LSTM models trained on combined meteorological and vegetation state datasets. Attribution is examined across input sequences to distinguish short-lived from persistent controls on GPP, and to assess how these patterns vary across environmental conditions and seasonal contexts. This analysis is contrasted with hypothesis-driven "exposure-lag" representations derived from Distributed Lag Non-linear Models (DLNM), highlighting differences in how temporal influence is represented, constrained, and interpreted.
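To make the attribution step concrete, the following sketch applies Integrated Gradients to an LSTM of the kind shown above, using the Captum library and reusing the hypothetical GPPFromDrivers class from the earlier sketch. The baseline choice, sequence length, and four-driver setup are placeholders rather than the study's actual configuration.

```python
import torch
from captum.attr import IntegratedGradients

model = GPPFromDrivers(n_drivers=4)   # hypothetical drivers: radiation, temperature, VPD, soil moisture
model.eval()

x = torch.randn(8, 30, 4)             # placeholder batch of 30-step driver sequences
baseline = torch.zeros_like(x)        # zero baseline; climatological baselines are another option

ig = IntegratedGradients(model)
attributions = ig.attribute(x, baselines=baseline, target=0)   # same shape as x: (batch, timesteps, drivers)

# Average absolute attribution over the batch to see when, and for how long,
# each driver influences the predicted GPP within the input window.
temporal_profile = attributions.abs().mean(dim=0)              # (timesteps, drivers)
```

A temporal profile of this kind is what allows short-lived controls to be distinguished from persistent ones, and is the quantity contrasted with the exposure-lag surfaces estimated by the DLNM approach.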
Rather than treating explainability as a post-hoc validation step, this work demonstrates how XAI can function as a scientific diagnostic tool, enabling interrogation of black-box models and supporting exploratory discovery of emergent temporal behaviour. The results illustrate how explainable ML can enhance trust in data-driven ecosystem modelling while offering complementary insights to traditional confirmatory approaches, particularly in complex, high-dimensional settings where temporal dependencies are unknown or context-dependent.
How to cite: Hughes, T.: From Prediction to Understanding: Using Explainable AI to Reveal Temporal Drivers of Ecosystem Productivity, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-19181, https://doi.org/10.5194/egusphere-egu26-19181, 2026.