EGU25-5727, updated on 14 Mar 2025
https://doi.org/10.5194/egusphere-egu25-5727
EGU General Assembly 2025
© Author(s) 2025. This work is distributed under
the Creative Commons Attribution 4.0 License.
Poster | Tuesday, 29 Apr, 16:15–18:00 (CEST), Display time Tuesday, 29 Apr, 14:00–18:00
Hall A, A.29
Using Explainable Artificial Intelligence (XAI) to Analyze the Behavior of Global Water Models
Lea Faber1, Karoline Wiesner2, Ting Tang3, Yoshihide Wada4, and Thorsten Wagener5
  • 1Institute of Physics and Astronomy, University of Potsdam, Potsdam, Germany (lefaber@uni-potsdam.de)
  • 2Institute of Physics and Astronomy, University of Potsdam, Potsdam, Germany (karoline.wiesner@uni-potsdam.de)
  • 3Biological and Environmental Science and Engineering Division, King Abdullah University of Science and Technology, Thuwal, Kingdom of Saudi Arabia (ting.tang@kaust.edu.sa)
  • 4Biological and Environmental Science and Engineering Division, King Abdullah University of Science and Technology, Thuwal, Kingdom of Saudi Arabia (yoshihide.wada@kaust.edu.sa)
  • 5Institute of Environmental Science and Geography, University of Potsdam, Potsdam, Germany (thorsten.wagener@uni-potsdam.de)

Model Intercomparison Projects in the Earth Sciences have shown that the outputs of
Earth System Models often vary widely and can therefore give quite different results,
with no single model consistently outperforming the others. Examples include Global Water
Models (GWMs) as well as Global Climate Models (GCMs). The high computational cost
of running such models makes comprehensive statistical analyses challenging, a common issue
with many complex models today. Machine learning models have become popular surrogates
for slow process-based models because of their computational speed, at least once trained. This
speed makes it possible to use techniques from Explainable AI (XAI) to analyze the behavior
of the surrogate model.
Here, we analyze long-term averages of the GWM ‘Community Water Model’ (CWatM)
for different parts of the global domain for actual evapotranspiration Ea, total runoff Q, and
groundwater recharge R. We train an artificial neural network on the model’s input and output
data and use three different strategies to assess the importance of the input data: LassoNet for
subset selection and feature ranking, along with Sobol’ indices and DeepSHAP for interpretability.
Our results show that subset selection can effectively reduce model complexity before XAI
analysis. For some hydrological domains, the number of relevant input
variables for a chosen output reduces to fewer than 15 of the 98 model inputs, while
others remain more complex, requiring many variables to reach performances with R² > 0.8.
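The variance-based sensitivity analysis mentioned above can be illustrated with a minimal sketch of first-order Sobol' indices computed by the Saltelli pick-freeze estimator. Here a simple additive toy function stands in for the trained neural-network surrogate; the function, its weights, and the sample size are illustrative assumptions, not the authors' actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained surrogate model: the output depends strongly
# on input 0, weakly on input 1, and not at all on input 2 (illustrative
# weights, not CWatM data).
def surrogate(X):
    return 3.0 * X[:, 0] + 1.0 * X[:, 1] + 0.0 * X[:, 2]

n, d = 100_000, 3
A = rng.uniform(0.0, 1.0, (n, d))   # base sample
B = rng.uniform(0.0, 1.0, (n, d))   # independent resample
fA, fB = surrogate(A), surrogate(B)
var = np.var(np.concatenate([fA, fB]))

# First-order Sobol' index of input i via the pick-freeze estimator:
# S_i = E[f(B) * (f(A_B^i) - f(A))] / Var(f),
# where A_B^i is A with column i replaced by column i of B.
S = np.empty(d)
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    S[i] = np.mean(fB * (surrogate(ABi) - fA)) / var

print(np.round(S, 3))  # index near zero flags the irrelevant input
```

For this additive function the first-order indices sum to roughly one and the inactive input scores near zero, which mirrors how inputs with negligible indices can be dropped before further XAI analysis.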

How to cite: Faber, L., Wiesner, K., Tang, T., Wada, Y., and Wagener, T.: Using Explainable Artificial Intelligence (XAI) to Analyze the Behavior of Global Water Models, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-5727, https://doi.org/10.5194/egusphere-egu25-5727, 2025.