- Queens College City University of New York, Philosophy, United States of America (roloughlin@qc.cuny.edu)
AI-driven climate models are often criticized as “black boxes,” raising concerns about their credibility for scientific and policy-relevant decision making. Explainable artificial intelligence (XAI) is frequently proposed as a solution, focusing on identifying systematic relationships between model input and output data to characterize model behavior. This paper builds on prior work arguing that trust in both dynamical and AI models depends not on such input-output characterizations alone but on scientists’ component-level understanding of their models (O’Loughlin et al. 2025). Component-level understanding refers to scientists’ ability to point to specific components of the model architecture as the culprit for erratic model behavior, or as the crucial reason why the model functions well.
We argue that component-level understanding plays a distinctive role in establishing credibility because it expands scientists’ ability to answer a wider range of what-if-things-had-been-different questions. For example, when a model exhibits unexpected sensitivity or instability, component-level understanding enables scientists to ask (and design targeted tests to determine) whether the behavior would persist if a specific parameterization, architectural module, or physically informed constraint were altered. We see examples of this in CMIP, e.g., diagnosing the effect of a cloud microphysics scheme on a model’s climate sensitivity (Gettelman et al. 2019; Zelinka et al. 2020), and in AI-driven climate science as well, e.g., attributing model instability to particular architectural choices such as unconstrained neural network layers or inappropriate spectral representations (e.g., Beucler et al., 2021; Bonev et al., 2023). By linking model behavior to specific components or architectural features, scientists are better positioned to diagnose misbehavior, explore counterfactual scenarios, and explain why a model behaves as it does under varying conditions. This explanatory capacity enables scientists to establish credibility with decision-makers by demonstrating when, why, and under what conditions AI-driven climate models can be trusted.
Such explanations will inevitably be incomplete and context-dependent, particularly in complex models whose components interact in nonlinear ways and are often intended to represent emergent climate phenomena. Nevertheless, we argue that credibility is built through explanatory practices involving model successes and failures alike. We conclude by outlining several pathways for strengthening component-level understanding in AI-driven climate science: scientists may develop such understanding themselves; work in close collaboration with AI model builders and domain experts; design model intercomparison projects that explicitly support component-level diagnosis; or adopt evaluation and benchmarking practices that prioritize explanatory and counterfactual insight alongside predictive performance. On this view, establishing credibility requires organizing scientific work so that explanation remains a central and achievable activity.
References
Beucler, T., et al.: Enforcing analytic constraints in neural networks…, Phys. Rev. Lett., 126, 098302, 2021.
Bonev, B., et al.: Spherical Fourier Neural Operators…, arXiv [preprint], https://doi.org/10.48550/arXiv.2306.03838, 2023.
Gettelman, A., et al.: High Climate Sensitivity in the Community Earth System Model Version 2 (CESM2), Geophys. Res. Lett., 46, 8329–8337, https://doi.org/10.1029/2019GL083978, 2019.
O'Loughlin, R. J., et al.: Moving beyond post hoc explainable artificial intelligence…, Geosci. Model Dev., 18, https://doi.org/10.5194/gmd-18-787-2025, 2025.
Zelinka, M. D., et al.: Causes of Higher Climate Sensitivity in CMIP6 Models, Geophys. Res. Lett., 47, e2019GL085782, https://doi.org/10.1029/2019GL085782, 2020.
How to cite: O'Loughlin, R.: Earning Credibility in AI-Driven Climate Science: The Role of Component-Level Understanding, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-3879, https://doi.org/10.5194/egusphere-egu26-3879, 2026.