EGU22-11787
https://doi.org/10.5194/egusphere-egu22-11787
EGU General Assembly 2022
© Author(s) 2022. This work is distributed under
the Creative Commons Attribution 4.0 License.

Explainable deep learning for wildfire danger estimation

Michele Ronco1, Ioannis Prapas2, Spyros Kondylatos2, Ioannis Papoutsis2, Gustau Camps-Valls1, Miguel-Ángel Fernández-Torres1, Maria Piles Guillem1, and Nuno Carvalhais3
  • 1Image Processing Laboratory, University of Valencia, Spain (michele.ronco@uv.es)
  • 2IAASARS, National Observatory of Athens, Greece
  • 3Max Planck Institute for Biogeochemistry, Germany

Deep learning models have been remarkably successful across many fields, yet their application to disaster management is hindered by the lack of transparency and trust that characterises artificial neural networks. This is particularly relevant in the Earth sciences, where fitting is only a small part of the problem and process understanding matters more [1,2]. In this regard, many eXplainable Artificial Intelligence (XAI) algorithms have been proposed in the literature over the past few years [3]. We suggest that combining saliency maps with interpretable local approximations, such as LIME, extracts complementary insights and yields more robust explanations. We address wildfire forecasting, where interpreting the model's predictions is crucial for putting effective mitigation strategies into action. Daily risk maps are obtained by training a convolutional LSTM on ten years of spatio-temporal data, including weather variables, remote sensing indices and static layers describing land characteristics [4]. We show how XAI allows us to interpret the predicted fire danger, thereby narrowing the gap between black-box approaches and disaster management.
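To illustrate the two explanation strategies the abstract combines, the sketch below computes a gradient saliency map and a LIME-style local linear surrogate for a toy model. This is not the convolutional LSTM of [4]: the logistic model, its weights, and the feature vector are purely illustrative stand-ins for flattened spatio-temporal features, chosen so the example runs without any trained network.

```python
import numpy as np

# Hypothetical stand-in for a trained fire-danger model: logistic regression
# over six flattened features (weather, remote sensing indices, static layers).
# Weights are illustrative only; the abstract's real model is a ConvLSTM.
w = np.array([2.0, -1.0, 0.5, 0.0, 3.0, -0.2])
b = -0.5
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def model(x):
    """Predicted fire-danger probability for feature vector(s) x."""
    return sigmoid(x @ w + b)

def gradient_saliency(x):
    """|d model / d x| at x; for a logistic model this is s*(1-s)*|w|."""
    s = model(x)
    return np.abs(s * (1.0 - s) * w)

def lime_weights(x, n_samples=500, scale=0.1):
    """LIME-style surrogate: fit a linear model to locally perturbed samples."""
    perturbed = x + scale * rng.standard_normal((n_samples, x.size))
    preds = model(perturbed)
    # Least-squares fit of preds ~ perturbed features (plus an intercept).
    design = np.hstack([perturbed, np.ones((n_samples, 1))])
    coefs, *_ = np.linalg.lstsq(design, preds, rcond=None)
    return coefs[:-1]  # drop the intercept

x0 = np.array([0.3, 0.1, -0.2, 0.5, 0.4, 0.0])  # one pixel's feature vector
sal = gradient_saliency(x0)
lime = lime_weights(x0)
print("most salient feature (gradient):", sal.argmax())
print("most salient feature (LIME):", np.abs(lime).argmax())
```

When the two attributions agree on which inputs drive the prediction, as they do here by construction, the explanation is more trustworthy; disagreement flags cases where a single XAI method would mislead.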


[1] Camps-Valls, G., Tuia, D., Zhu, X. X., and Reichstein, M. (Eds.): Deep Learning for the Earth Sciences: A Comprehensive Approach to Remote Sensing, Climate Science and Geosciences, Wiley & Sons, 2021.

[2] Reichstein, M., Camps-Valls, G., Stevens, B., Denzler, J., Carvalhais, N., Jung, M., and Prabhat: Deep learning and process understanding for data-driven Earth system science, Nature 566, 195–204, 2019.

[3] Samek, W., Montavon, G., Vedaldi, A., Hansen, L. K., and Müller, K.-R. (Eds.): Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, LNCS vol. 11700, Springer, 2019.

[4] Prapas, I., Kondylatos, S., Papoutsis, I., Camps-Valls, G., Ronco, M., Fernández-Torres, M.-Á., Piles Guillem, M., and Carvalhais, N.: Deep Learning Methods for Daily Wildfire Danger Forecasting, arXiv:2111.02736, 2021.


How to cite: Ronco, M., Prapas, I., Kondylatos, S., Papoutsis, I., Camps-Valls, G., Fernández-Torres, M.-Á., Piles Guillem, M., and Carvalhais, N.: Explainable deep learning for wildfire danger estimation, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11787, https://doi.org/10.5194/egusphere-egu22-11787, 2022.