EGU24-3954, updated on 08 Mar 2024
https://doi.org/10.5194/egusphere-egu24-3954
EGU General Assembly 2024
© Author(s) 2024. This work is distributed under
the Creative Commons Attribution 4.0 License.

Collocation of multi-source satellite imagery for ship detection based on Deep Learning models

Tran-Vu La1, Minh-Tan Pham2, and Marco Chini1
  • 1Luxembourg Institute of Science and Technology (LIST), Environmental Research and Innovation (ERIN), Esch-sur-Alzette, Luxembourg (tran-vu.la@list.lu)
  • 2Institut de Recherche en Informatique et Systèmes Aléatoires (IRISA), Université Bretagne Sud (UBS), Vannes, France (minh-tan.pham@irisa.fr)

The development of the world economy in recent years has been accompanied by a significant increase in maritime traffic. Accordingly, numerous ship collision incidents, especially in dense maritime traffic zones, have been reported, causing damage such as oil spills and transportation interruptions. To improve maritime surveillance and minimize incidents at sea, satellite imagery provided by synthetic aperture radar (SAR) and optical sensors has become one of the most effective and economical solutions in recent years. Indeed, both SAR and optical images can be used to detect vessels of different sizes and categories, thanks to their high spatial resolution and wide swath coverage.

To process large volumes of satellite data, Deep Learning (DL) has become an indispensable solution for detecting ships with a high accuracy rate. However, DL models require considerable time and effort to implement, especially for training, validation, and testing on big datasets. This issue grows when different satellite imagery datasets are used for ship detection, because the data preparation tasks multiply. Therefore, this paper investigates various approaches for applying DL models trained and tested on different datasets with varying spatial resolutions and radiometric features. Concretely, we focus on two aspects of ship detection from multi-source satellite imagery that have not been attentively discussed in the literature. First, we compare the performance of DL models trained on a single high-resolution (HR) or medium-resolution (MR) dataset with that of models trained on the combined HR and MR datasets. Second, we compare the performance of DL models trained on an optical or SAR dataset and tested on the other. Likewise, we evaluate the performance of DL models trained on the combined SAR and optical dataset. The objective of this work is to answer a practical question in maritime surveillance, particularly for emergency cases: can DL models trained on one dataset be applied directly to other datasets that differ in spatial resolution and radiometric features, without supplementary steps such as data preparation and model retraining?
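The cross-dataset protocol described above (train on each single dataset and on the pooled set, then test on every dataset) can be sketched as follows. The `cross_evaluate` helper, its stub `train_fn`/`eval_fn` callables, and the dataset names are illustrative placeholders, not the authors' actual pipeline.

```python
def cross_evaluate(datasets, train_fn, eval_fn):
    """Train a detector on each source dataset, plus the pooled set,
    and evaluate every trained model on every test dataset.

    datasets: dict mapping dataset name -> list of training samples
    train_fn: callable(list of samples) -> trained model
    eval_fn:  callable(model, list of samples) -> score (e.g. AP)
    Returns a dict keyed by (train_name, test_name).
    """
    # Pool all sources into one combined training set
    pooled = [s for ds in datasets.values() for s in ds]
    train_sources = dict(datasets, combined=pooled)

    results = {}
    for train_name, train_ds in train_sources.items():
        model = train_fn(train_ds)
        for test_name, test_ds in datasets.items():
            results[(train_name, test_name)] = eval_fn(model, test_ds)
    return results


# Toy usage with dummy sample lists and stub train/eval functions
datasets = {"optical_hr": ["img1", "img2", "img3"], "sar_mr": ["img4", "img5"]}
scores = cross_evaluate(
    datasets,
    train_fn=lambda ds: len(ds),           # stub "model" = training-set size
    eval_fn=lambda model, ds: model + len(ds),
)
```

With two source datasets this yields a 3×2 grid of (train, test) scores, covering the single-source and combined-source comparisons the abstract describes.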

When dealing with a limited number of training images, the performance of DL models under the approaches proposed in this study was satisfactory: they improved average precision by 5–20%, depending on the optical images tested. Likewise, DL models trained on the combined optical and radar dataset could be applied to both optical and radar images. Our experiments also showed that models trained on an optical dataset could be used on radar images, whereas those trained on a radar dataset scored very poorly when applied to optical images.
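Average precision, the metric in which the improvements above are reported, is the area under the precision-recall curve of ranked detections. The following is a generic sketch of that computation, not the authors' evaluation code; the input format (confidence plus a true/false-positive flag per detection) is an assumption.

```python
def average_precision(scored_hits, num_ground_truth):
    """Compute AP as the area under the precision-recall curve.

    scored_hits: list of (confidence, is_true_positive) pairs,
                 one per detection
    num_ground_truth: total number of ground-truth ships
    """
    # Rank detections by decreasing confidence
    ranked = sorted(scored_hits, key=lambda h: h[0], reverse=True)

    tp = fp = 0
    ap = 0.0
    prev_recall = 0.0
    for _, is_tp in ranked:
        if is_tp:
            tp += 1
        else:
            fp += 1
        precision = tp / (tp + fp)
        recall = tp / num_ground_truth
        # Rectangular integration of the PR curve
        ap += precision * (recall - prev_recall)
        prev_recall = recall
    return ap
```

For example, a detector whose two detections both match the two ground-truth ships gets AP = 1.0, while one false alarm ranked above the only true hit halves the AP to 0.5.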

How to cite: La, T.-V., Pham, M.-T., and Chini, M.: Collocation of multi-source satellite imagery for ship detection based on Deep Learning models, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-3954, https://doi.org/10.5194/egusphere-egu24-3954, 2024.