EGU26-14866, updated on 14 Mar 2026
https://doi.org/10.5194/egusphere-egu26-14866
EGU General Assembly 2026
© Author(s) 2026. This work is distributed under
the Creative Commons Attribution 4.0 License.
Oral | Tuesday, 05 May, 10:50–11:10 (CEST)
Room 3.16/17
Ilya M. Sobol’ (1926–2025): A Tribute and Overview of the Foundations of Global Sensitivity Analysis, Recent Advances, and Extensions toward Explainable Artificial Intelligence
Saman Razavi1,2, Banamali Panigrahi1, and Hamed Abbasnezhad1
  • 1School of Environment and Sustainability, and Global Institute for Water Security, University of Saskatchewan, Saskatoon, Canada
  • 2School of Civil and Environmental Engineering, University of New South Wales, Sydney, Australia

The recent passing of Ilya M. Sobol’ marks the loss of one of the most influential figures in the development of global sensitivity analysis (GSA). Sobol’s work fundamentally shaped how uncertainty in model outputs is attributed to uncertain inputs, providing a rigorous and widely adopted framework that has become a cornerstone of uncertainty and sensitivity analysis across the Earth, environmental, and hydrological sciences.

This contribution first offers a brief tribute to Sobol’s scientific legacy and a concise review of the conceptual foundations of GSA. We revisit the primary question that motivated Sobol’s work—How much of the uncertainty in the model output is caused by each uncertain input?—and discuss why this question remains central for the analysis of complex, nonlinear, and high-dimensional models. We also emphasize that the principles underpinning GSA are increasingly relevant in the context of artificial intelligence (AI), where complex and high-dimensional models demand robust and transparent methods for attributing influence and uncertainty.
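The variance-based question above can be made concrete with a small Monte Carlo sketch of first-order Sobol' indices. The toy additive model, its coefficients, and the Saltelli-style estimator below are illustrative assumptions chosen for clarity, not material from the abstract:

```python
import numpy as np

# Toy additive model (an illustrative assumption): Y = 2*X1 + X2,
# with X1, X2 independent and uniform on [0, 1]. Analytically, the
# first-order Sobol' indices are S_i = a_i^2 / (a_1^2 + a_2^2),
# i.e. S_1 = 0.8 and S_2 = 0.2.
def model(x):
    return 2.0 * x[:, 0] + 1.0 * x[:, 1]

rng = np.random.default_rng(0)
n, d = 100_000, 2
A = rng.random((n, d))  # first independent sample matrix
B = rng.random((n, d))  # second independent sample matrix

fA, fB = model(A), model(B)
var_y = np.var(np.concatenate([fA, fB]))

# Saltelli-style estimator: S_i ~ mean(fB * (f(AB_i) - fA)) / Var(Y),
# where AB_i equals A with column i swapped for the same column of B.
S = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    S.append(np.mean(fB * (model(ABi) - fA)) / var_y)

print([round(s, 3) for s in S])  # close to [0.8, 0.2]
```

Each index answers Sobol's question directly: the fraction of output variance attributable to one input acting alone, with the remainder (here zero, since the toy model is additive) due to other inputs and interactions.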

Building on this foundation, we highlight recent developments around variogram analysis of response surfaces (VARS), and in particular X-VARS, which extend GSA concepts to settings relevant for explainable AI (XAI). By leveraging paired perturbations and scale-explicit analysis, X-VARS enables efficient and robust attribution of uncertainty and influence in complex models, making GSA practical for modern AI-driven applications. Compared to established explainability methods such as SHAP, X-VARS offers substantial gains in computational efficiency while providing diagnostically richer insight into nonlinearity, interactions, and scale dependence.
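The paired-perturbation, scale-explicit idea can be sketched with a simplified directional variogram. The toy model, the single perturbation scale h, and the normalization below are illustrative assumptions; this is not the authors' X-VARS implementation:

```python
import numpy as np

# Minimal VARS-style sketch (assumptions: toy model, one scale h).
# The directional variogram at separation h along input i is
# gamma_i(h) = 0.5 * E[(y(x + h*e_i) - y(x))^2], estimated here
# from paired perturbations of a common base sample.
def model(x):
    return np.sin(2.0 * np.pi * x[:, 0]) + 0.5 * x[:, 1]

rng = np.random.default_rng(1)
n, d, h = 20_000, 2, 0.1               # h: perturbation scale (resolution)
base = rng.random((n, d)) * (1.0 - h)  # keep x + h inside [0, 1]

gamma = np.empty(d)
for i in range(d):
    pert = base.copy()
    pert[:, i] += h                    # paired perturbation along input i only
    # Half the mean squared response difference at scale h in direction i.
    gamma[i] = 0.5 * np.mean((model(pert) - model(base)) ** 2)

ratios = gamma / gamma.sum()
print(np.round(ratios, 3))  # the oscillatory input dominates at this scale
```

Repeating the estimate across a range of h values makes the sensitivities scale-explicit, which is the diagnostic information (nonlinearity, scale dependence) that single-number attributions such as SHAP values do not expose.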

We conclude by highlighting some key challenges and opportunities for the next generation of GSA methods in complex modelling and AI applications.

How to cite: Razavi, S., Panigrahi, B., and Abbasnezhad, H.: Ilya M. Sobol’ (1926–2025): A Tribute and Overview of the Foundations of Global Sensitivity Analysis, Recent Advances, and Extensions toward Explainable Artificial Intelligence, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-14866, https://doi.org/10.5194/egusphere-egu26-14866, 2026.