EGU26-19816, updated on 14 Mar 2026
https://doi.org/10.5194/egusphere-egu26-19816
EGU General Assembly 2026
© Author(s) 2026. This work is distributed under
the Creative Commons Attribution 4.0 License.
Poster | Monday, 04 May, 10:45–12:30 (CEST), Display time Monday, 04 May, 08:30–12:30
Hall X4, X4.57
An Inherently-Interpretable Approach to Uncover the Head Importance of Attention Networks
Ivica Obadic1, Luca Rigon2, and Xiaoxiang Zhu3
  • 1Technical University of Munich, Data Science in Earth Observation, Munich, Germany (ivica.obadic@tum.de)
  • 2Technical University of Munich, Data Science in Earth Observation, Munich, Germany (luca.rigon@tum.de)
  • 3Technical University of Munich, Data Science in Earth Observation, Munich, Germany (xiaoxiang.zhu@tum.de)

Attention-based deep learning models are becoming a ubiquitous approach for modeling the complex temporal dependencies in many vital Earth observation applications, such as agricultural monitoring. They typically consist of multiple attention heads, each containing attention weights that determine how temporal information is combined for the model's prediction. While analyzing the attention weights can provide insights into the model's workings, the existence of multiple heads makes it difficult to comprehend the temporal information extracted by the model. To overcome this issue, we propose an inherently interpretable approach that automatically weights the importance of each head during the model's forward pass. Our evaluation on the task of crop-type classification shows that the model maintains high accuracy while simplifying interpretation by highlighting only the most significant attention heads.
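The abstract does not specify how the head importance is parameterized; a common choice is one learnable scalar per head, passed through a softmax so the resulting weights sum to one and can be read directly as head importances. The following is a minimal, dependency-free sketch under that assumption (function names, the logit parameterization, and the tiny per-head attention are illustrative, not the authors' implementation):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def head_weighted_attention(queries, keys, values, head_logits):
    """Combine per-head attention outputs using learnable head-importance
    weights, applied in the forward pass.

    queries/keys/values: per-head lists of T x d matrices (T time steps).
    head_logits: one learnable scalar per head; a softmax turns them into
    importance weights that sum to 1, making the mixture interpretable.
    """
    head_weights = softmax(head_logits)
    T, d = len(queries[0]), len(queries[0][0])
    combined = [[0.0] * d for _ in range(T)]
    for h, w in enumerate(head_weights):
        # standard scaled dot-product attention for head h
        for t in range(T):
            scores = [
                sum(queries[h][t][i] * keys[h][s][i] for i in range(d)) / math.sqrt(d)
                for s in range(T)
            ]
            attn = softmax(scores)
            # accumulate this head's output, scaled by its importance weight
            for i in range(d):
                combined[t][i] += w * sum(attn[s] * values[h][s][i] for s in range(T))
    return combined, head_weights
```

Because the head weights are produced inside the forward pass, sparse or peaked weights directly indicate which heads (and hence which temporal attention patterns) drive the prediction, without any post-hoc attribution step.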

How to cite: Obadic, I., Rigon, L., and Zhu, X.: An Inherently-Interpretable Approach to Uncover the Head Importance of Attention Networks, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-19816, https://doi.org/10.5194/egusphere-egu26-19816, 2026.