EGU24-19198, updated on 11 Mar 2024
https://doi.org/10.5194/egusphere-egu24-19198
EGU General Assembly 2024
© Author(s) 2024. This work is distributed under
the Creative Commons Attribution 4.0 License.

Learning-Based Hyperspectral Image Compression Using A Spatio-Spectral Approach

Niklas Sprengel1, Martin Hermann Paul Fuchs2, and Begüm Demir3,4
  • 1TU Berlin, Faculty IV – Electrical Engineering and Computer Science, Remote Sensing Image Analysis Group, Berlin, Germany (niklasspreng@gmx.de)
  • 2TU Berlin, Faculty IV – Electrical Engineering and Computer Science, Remote Sensing Image Analysis Group, Berlin, Germany (m.fuchs@tu-berlin.de)
  • 3TU Berlin, Faculty IV – Electrical Engineering and Computer Science, Remote Sensing Image Analysis Group, Berlin, Germany (demir@tu-berlin.de)
  • 4BIFOLD - Berlin Institute for the Foundations of Learning and Data, Berlin, Germany

Advances in hyperspectral imaging have led to a significant increase in the volume of hyperspectral image archives. Therefore, the development of efficient and effective hyperspectral image compression methods is an important research topic in remote sensing. Recent studies show that learning-based compression methods are able to preserve the reconstruction quality of images at lower bitrates than traditional methods [1]. Existing learning-based image compression methods usually apply spatial compression per image band or to all bands jointly. However, hyperspectral images contain a high degree of spectral correlation, which necessitates more complex compression architectures that can reduce both spatial and spectral correlations for a more efficient compression. To address this problem, we propose a novel Spatio-Spectral Compression Network (S2C-Net).
S2C-Net is a flexible architecture for hyperspectral image compression that exploits both the spatial and the spectral dependencies of hyperspectral images. It combines spectral and spatial autoencoders into a joint model. To this end, a learning-based pixel-wise spectral autoencoder is first pre-trained. Then, a spatial autoencoder network is inserted into the bottleneck of the spectral autoencoder to further compress the spatial correlations: the spatial autoencoder is applied to the output of the spectral encoder, and the spectral decoder is applied to the output of the spatial autoencoder. The model is then trained with a novel mixed loss function that combines the losses of the spectral and the spatial model. Since the spatial model operates on the output of the spectral encoder, spatial compression methods that are optimised for 2D image compression can be used in S2C-Net in the context of hyperspectral image compression.
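The nesting described above can be illustrated with a short PyTorch sketch. This is not the authors' implementation; the layer widths, the band count (202 bands, as in the EnMAP-based HySpecNet-11k patches), the latent sizes, and the loss weighting alpha are illustrative assumptions.

```python
# Minimal sketch of the spatio-spectral composition described above.
# All module sizes and the loss weighting are assumptions, not the paper's values.
import torch
import torch.nn as nn

class SpectralAE(nn.Module):
    """Pixel-wise spectral autoencoder: compresses the band dimension with 1x1 convs."""
    def __init__(self, bands=202, latent_bands=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(bands, 64, kernel_size=1), nn.ReLU(),
            nn.Conv2d(64, latent_bands, kernel_size=1),
        )
        self.decoder = nn.Sequential(
            nn.Conv2d(latent_bands, 64, kernel_size=1), nn.ReLU(),
            nn.Conv2d(64, bands, kernel_size=1),
        )

class SpatialAE(nn.Module):
    """2D spatial autoencoder applied to the spectral bottleneck (stride-2 convs)."""
    def __init__(self, channels=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, stride=2, padding=1),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, channels, kernel_size=4, stride=2, padding=1),
        )

class S2CNet(nn.Module):
    """Spatial autoencoder nested inside the bottleneck of a pre-trained spectral AE."""
    def __init__(self, spectral: SpectralAE, spatial: SpatialAE):
        super().__init__()
        self.spectral, self.spatial = spectral, spatial

    def forward(self, x):
        z_spec = self.spectral.encoder(x)                                # spectral bottleneck
        z_spec_hat = self.spatial.decoder(self.spatial.encoder(z_spec))  # spatial AE on the bottleneck
        x_hat = self.spectral.decoder(z_spec_hat)                        # full reconstruction
        return x_hat, z_spec, z_spec_hat

def mixed_loss(x, x_hat, z_spec, z_spec_hat, alpha=0.5):
    """Mixed loss combining an image-level (spectral) and a bottleneck-level (spatial) term."""
    spectral_term = nn.functional.mse_loss(x_hat, x)
    spatial_term = nn.functional.mse_loss(z_spec_hat, z_spec)
    return alpha * spectral_term + (1 - alpha) * spatial_term

# Example forward pass on a dummy hyperspectral patch (batch, bands, height, width).
x = torch.randn(2, 202, 128, 128)
model = S2CNet(SpectralAE(), SpatialAE())
x_hat, z, z_hat = model(x)
loss = mixed_loss(x, x_hat, z, z_hat)
```

Because the spatial autoencoder only ever sees the spectral bottleneck (a 2D feature map with few channels), any off-the-shelf 2D image compression network could in principle be substituted for the SpatialAE sketched here.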
In the experiments, we evaluated S2C-Net on HySpecNet-11k, a large-scale hyperspectral image dataset [2]. Experimental results show that S2C-Net outperforms both spectral and spatial state-of-the-art compression methods at bitrates below 1 bit per pixel per channel (bpppc). Specifically, it achieves lower distortion at comparable compression rates and can reach much higher compression rates with only slightly reduced reconstruction quality.
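For context, bpppc relates the size of the compressed representation to the number of pixels and spectral bands of the input. A minimal sketch of this bookkeeping, with assumed tensor shapes and ignoring the entropy coding a learned codec would apply, is:

```python
# Rough bpppc bookkeeping for a latent of shape (C_lat, H_lat, W_lat) stored at
# bits_per_element bits, relative to an input of shape (C, H, W).
# Illustrative only; real learned codecs use entropy coding to lower the rate further.
def bpppc(latent_shape, bits_per_element, image_shape):
    c_lat, h_lat, w_lat = latent_shape
    c, h, w = image_shape
    total_bits = c_lat * h_lat * w_lat * bits_per_element
    return total_bits / (c * h * w)

# E.g. a 32x32x32 latent at 8 bits per element for a 202-band 128x128 image:
print(bpppc((32, 32, 32), 8, (202, 128, 128)))  # ~0.079 bpppc
```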

References
[1] F. Zhang, C. Chen, and Y. Wan, “A survey on hyperspectral remote sensing image compression,” in IEEE IGARSS, 2023, pp. 7400–7403.
[2] M. H. P. Fuchs and B. Demir, “HySpecNet-11k: A large-scale hyperspectral dataset for benchmarking learning-based hyperspectral image compression methods,” in IEEE IGARSS, 2023, pp. 1779–1782.

How to cite: Sprengel, N., Fuchs, M. H. P., and Demir, B.: Learning-Based Hyperspectral Image Compression Using A Spatio-Spectral Approach, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-19198, https://doi.org/10.5194/egusphere-egu24-19198, 2024.