EGU22-10597, updated on 10 Jan 2024
https://doi.org/10.5194/egusphere-egu22-10597
EGU General Assembly 2022
© Author(s) 2024. This work is distributed under
the Creative Commons Attribution 4.0 License.

Semantic segmentation of historical images in Antarctica with neural networks

Felix Dahle1, Roderik Lindenbergh1, Julian Tanke2, and Bert Wouters1,3
  • 1Geoscience and Remote Sensing, TU Delft, Delft, Netherlands
  • 2Computer Sciences, University of Bonn, Bonn, Germany
  • 3Marine and Atmospheric Research, University of Utrecht, Utrecht, Netherlands

The USGS digitized many historical photos of Antarctica, which could provide useful insights into this region from before the satellite era. However, these images are merely scanned and do not contain semantic information, which makes it difficult to use or search the archive (for example, to filter for cloudless images). Even though countless semantic segmentation methods exist, they do not work well on these images. The images are grayscale only, often have poor image quality (low contrast or Newton's rings) and do not contain very distinct classes, for example snow/clouds (both white pixels) or rocks/water (both black pixels). Furthermore, especially in this archive, the images are not only top-down but can also be oblique.

We are training a machine-learning-based network to apply semantic segmentation to these images even under these challenging conditions. The pixels of each image are labelled with one of six classes: ice, snow, water, rocks, sky and clouds. No training data was available for these images, so we needed to create it ourselves; the amount of training data is therefore limited by the extensive time required for labelling. With this training data, a U-Net was trained, a fully convolutional network that works well even with few training images and still gives precise results.
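As an illustration only, a compact U-Net for single-channel (grayscale) input and six output classes could look like the sketch below. The abstract does not specify the framework or architecture details, so the PyTorch implementation, layer widths and the 256x256 input size are assumptions, not the authors' actual model.

# Minimal U-Net sketch for grayscale input and six classes
# (ice, snow, water, rocks, sky, clouds). Layer sizes are hypothetical.
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class SmallUNet(nn.Module):
    def __init__(self, n_classes=6):
        super().__init__()
        self.enc1 = double_conv(1, 32)
        self.enc2 = double_conv(32, 64)
        self.enc3 = double_conv(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = double_conv(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = double_conv(64, 32)
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)              # encoder level 1 (skip connection)
        e2 = self.enc2(self.pool(e1))  # encoder level 2 (skip connection)
        e3 = self.enc3(self.pool(e2))  # bottleneck
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))  # decoder level 2
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # decoder level 1
        return self.head(d1)           # per-pixel class logits

# One grayscale 256x256 image -> per-pixel logits for 6 classes.
logits = SmallUNet()(torch.randn(1, 1, 256, 256))
print(logits.shape)  # torch.Size([1, 6, 256, 256])

The skip connections are what allow a U-Net to produce sharp per-pixel predictions from relatively few training images, which is the property motivating its use here.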

In its current state, the model is trained with 67 images, split into 80% training and 20% validation images. After around 6000 epochs (approximately 30 h of training) the model converges and training is stopped. The model is evaluated on 8 randomly selected images that were not used during training or validation. These images contain all classes and are challenging to segment due to quality flaws and similar-looking classes. The model segments the images with an accuracy of around 75%. Whereas some classes, such as snow, sky, rocks and water, are recognized consistently, the classes ice and clouds are often confused with snow. Nevertheless, the general semantic structure of the images is recognized.
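For clarity, the reported accuracy can be read as overall pixel accuracy, and the class confusions can be made explicit with a per-class confusion matrix. The sketch below is an assumed NumPy implementation of these two measures, not the authors' evaluation code; the class ordering is likewise an assumption.

# Sketch of the evaluation measures: overall pixel accuracy and a confusion
# matrix that exposes confusions such as ice or clouds predicted as snow.
import numpy as np

CLASSES = ["ice", "snow", "water", "rocks", "sky", "clouds"]  # assumed order

def pixel_accuracy(pred, target):
    # Fraction of pixels whose predicted class matches the labelled class.
    return float((pred == target).mean())

def confusion_matrix(pred, target, n_classes=len(CLASSES)):
    # conf[i, j] counts pixels labelled as class i but predicted as class j.
    conf = np.zeros((n_classes, n_classes), dtype=np.int64)
    for true_cls in range(n_classes):
        mask = target == true_cls
        conf[true_cls] = np.bincount(pred[mask], minlength=n_classes)
    return conf

# Toy example with random labels on one 64x64 label map.
rng = np.random.default_rng(0)
target = rng.integers(0, 6, size=(64, 64))
pred = rng.integers(0, 6, size=(64, 64))
print(pixel_accuracy(pred, target))
print(confusion_matrix(pred, target))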

To improve the semantic segmentation, more training imagery is required to increase the variability of each class and prepare the model for more challenging scenes. This new training data will include labelled images both from the TMA archive and from other historical archives in order to increase the variability of the classes even further. It should also be checked whether the quality of the model can be improved further by including the images' metadata as an additional data source.

How to cite: Dahle, F., Lindenbergh, R., Tanke, J., and Wouters, B.: Semantic segmentation of historical images in Antarctica with neural networks, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10597, https://doi.org/10.5194/egusphere-egu22-10597, 2022.
