EGU22-2179
https://doi.org/10.5194/egusphere-egu22-2179
EGU General Assembly 2022
© Author(s) 2022. This work is distributed under
the Creative Commons Attribution 4.0 License.

Partially interpretable neural networks for high-dimensional extreme quantile regression: With application to wildfires within the Mediterranean Basin

Jordan Richards1, Raphaël Huser1, Emanuele Bevacqua2, and Jakob Zscheischler2
  • 1CEMSE division, King Abdullah University of Science and Technology, Thuwal, Saudi Arabia (jordan.richards@kaust.edu.sa)
  • 2Department of Computational Hydrosystems, Helmholtz Centre for Environmental Research, Leipzig, Germany
Quantile regression is a particularly powerful tool for modelling environmental data that exhibit spatio-temporal non-stationarity in their marginal behaviour. If our interest lies in quantifying the risk associated with particularly extreme or rare weather events, we may want to estimate conditional quantiles that lie outside the range of observable data; in such cases, it is practical to describe the data using a parametric extreme-value model whose parameters are represented as functions of predictor variables. Classical approaches to parametric extreme quantile regression use linear or additive relationships, and such approaches suffer from either limited predictive capability or poor computational efficiency in high dimensions.
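To make the role of such a parametric model concrete, the sketch below assumes, purely for illustration, that threshold exceedances follow a generalised Pareto distribution with covariate-dependent parameters; the abstract does not specify which extreme-value model is used, so the threshold u(x), scale sigma(x), shape xi(x), and exceedance probability zeta_u(x) are assumed notation.

```latex
% Illustrative sketch (assumed model, not necessarily the one used in this work):
% exceedances of a high threshold u(x) are taken to be generalised Pareto,
%   Y - u(x) | { Y > u(x), X = x }  ~  GPD( sigma(x), xi(x) ),
% with exceedance probability zeta_u(x) = Pr( Y > u(x) | X = x ).
% The conditional tau-quantile of Y, for tau > 1 - zeta_u(x), is then
\[
  Q_Y(\tau \mid \mathbf{x})
    = u(\mathbf{x})
    + \frac{\sigma(\mathbf{x})}{\xi(\mathbf{x})}
      \left[ \left\{ \frac{1-\tau}{\zeta_u(\mathbf{x})} \right\}^{-\xi(\mathbf{x})} - 1 \right],
\]
% so extreme quantile regression reduces to modelling u(x), sigma(x), xi(x)
% (and zeta_u(x)) as functions of the predictors x, allowing tau to be taken
% beyond the range of the observed data.
```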
 
Neural networks can capture complex non-linear relationships between variables and scale well to high-dimensional predictor sets. Whilst they have been successfully applied in the context of fitting extreme-value models, statisticians may choose to forego neural networks because of their “black box” nature: although they facilitate highly accurate prediction, statistical inference with neural networks is difficult, as their outputs cannot readily be interpreted. Inspired by the recent focus in the machine learning literature on “explainable AI”, we propose a framework for performing extreme quantile regression using partially interpretable neural networks. Distribution parameters are represented as functions of predictors with three main components: a linear function, an additive function, and a neural network, applied separately to complementary subsets of the predictors. The output from the linear and additive components is interpretable, whilst the neural-network component contributes to the high prediction accuracy of our method.
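The following is a minimal sketch, in Python with TensorFlow/Keras, of how such a partially interpretable regression function could be assembled: one distribution parameter is expressed as the sum of a linear term, an additive term, and a neural-network term, each acting on its own subset of predictors. All layer sizes, variable names, and the softplus link are illustrative assumptions and not the authors' implementation.

```python
# Minimal sketch (illustrative only) of a partially interpretable regression
# function: a distribution parameter theta(x) is the sum of a linear term, an
# additive term, and a flexible neural-network term, each applied to its own
# (complementary) subset of predictors.
import tensorflow as tf
from tensorflow.keras import layers, Model

p_lin, p_add, p_nn = 3, 2, 10   # assumed sizes of the three predictor subsets

# Inputs: one block of "linear" predictors, one scalar input per "additive"
# predictor, and one block of predictors handed to the neural network.
x_lin = layers.Input(shape=(p_lin,), name="linear_predictors")
x_add = [layers.Input(shape=(1,), name=f"additive_predictor_{j}") for j in range(p_add)]
x_nn = layers.Input(shape=(p_nn,), name="nn_predictors")

# Linear component: a single Dense layer with no activation, whose weights are
# directly interpretable as regression coefficients.
eta_lin = layers.Dense(1, activation=None, name="linear_term")(x_lin)

# Additive component: a small univariate sub-network per predictor, acting as a
# smooth function f_j(x_j); the f_j outputs are summed.
f_out = []
for j, xj in enumerate(x_add):
    fj = layers.Dense(16, activation="tanh")(xj)
    fj = layers.Dense(1, activation=None, name=f"additive_f{j}")(fj)
    f_out.append(fj)
eta_add = layers.Add(name="additive_term")(f_out)

# Neural-network component: a deeper multilayer perceptron capturing non-linear
# interactions among the remaining (non-interpreted) predictors.
h = layers.Dense(64, activation="relu")(x_nn)
h = layers.Dense(64, activation="relu")(h)
eta_nn = layers.Dense(1, activation=None, name="nn_term")(h)

# The parameter is the sum of the three components, passed through a
# positivity-enforcing link (softplus), e.g. for a scale parameter.
theta = layers.Add(name="sum_of_components")([eta_lin, eta_add, eta_nn])
theta = layers.Activation("softplus", name="parameter")(theta)

model = Model(inputs=[x_lin, *x_add, x_nn], outputs=theta)
model.summary()
```

In practice such an output would parameterise an extreme-value distribution, and the network would be trained by minimising the corresponding negative log-likelihood rather than a generic regression loss.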
We use our approach to estimate extreme quantiles and occurrence probabilities for wildfires occurring within a large spatial domain that encompasses the entirety of the Mediterranean Basin.
 

How to cite: Richards, J., Huser, R., Bevacqua, E., and Zscheischler, J.: Partially interpretable neural networks for high-dimensional extreme quantile regression: With application to wildfires within the Mediterranean Basin, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2179, https://doi.org/10.5194/egusphere-egu22-2179, 2022.