EGU2020-19933, updated on 12 Jun 2020
EGU General Assembly 2020
© Author(s) 2020. This work is distributed under
the Creative Commons Attribution 4.0 License.

Data Fusion on the CANDELA Cloud Platform

Wei Yao1, Octavian Dumitru1, Jose Lorenzo2, and Mihai Datcu1
  • 1German Aerospace Center, Remote Sensing Technology Institute, Germany
  • 2ATOS SPAIN SA, Madrid, Spain

This abstract describes the Data Fusion tool of the Horizon 2020 CANDELA project, which fuses Sentinel-1 (synthetic aperture radar) and Sentinel-2 (multispectral) satellite images at feature level. Features are extracted from each type of image and then combined in a new block within the Data Model Generation sub-module of the Data Fusion system.
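Feature-level fusion of this kind can be illustrated with a minimal sketch: simple per-patch statistics stand in for the (unspecified) feature extractors, and the fused representation is the concatenation of the Sentinel-1 and Sentinel-2 feature vectors for each co-registered patch. The patch size, the statistics used, and the toy data are assumptions for illustration only.

```python
import numpy as np

def patch_features(img, patch=32):
    """Toy feature extractor: per-patch mean and std of each band.

    img: (H, W, bands) array; returns an (n_patches, 2 * bands) matrix.
    """
    h, w, _ = img.shape
    feats = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            p = img[i:i + patch, j:j + patch, :]
            feats.append(np.concatenate([p.mean(axis=(0, 1)),
                                         p.std(axis=(0, 1))]))
    return np.asarray(feats)

# Toy co-registered scenes: 2-band SAR (e.g. VV/VH) and 4-band multispectral
rng = np.random.default_rng(0)
s1 = rng.normal(size=(64, 64, 2))
s2 = rng.normal(size=(64, 64, 4))

f1 = patch_features(s1)       # (4, 4) Sentinel-1 features
f2 = patch_features(s2)       # (4, 8) Sentinel-2 features
fused = np.hstack([f1, f2])   # feature-level fusion: (4, 12)
print(fused.shape)            # → (4, 12)
```

Because both feature matrices are indexed by the same co-registered patches, concatenation per row is all that is needed to obtain a joint feature space.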

The corresponding tool has already been integrated with the CANDELA cloud platform: its Data Model component on the platform acts as the backend, while the user interaction component on the local user machine serves as the frontend. There are four main sub-modules: Data Model Generation for Data Fusion (DMG-DF), DataBase Management System (DBMS), Image Search and Semantic Annotation (ISSA), and Multi-Knowledge and Query (QE). The DMG-DF and DBMS sub-modules have been dockerized and deployed on the CANDELA platform. The ISSA and QE sub-modules require user input through their interactive interfaces; they can be started as a standard Graphical User Interface (GUI) tool that connects directly to the database on the platform.

Before using the Data Fusion tool, users must prepare co-registered Sentinel-1 and Sentinel-2 products as inputs. The S1tiling service provided on the platform can cut out the overlapping Sentinel-1 area based on Sentinel-2 tile IDs.
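The geometric idea behind this tiling step can be sketched with plain bounding boxes: find the intersection of a Sentinel-1 scene footprint with the extent of a Sentinel-2 tile, and keep only that area. This is an illustration of the concept, not the S1tiling service's actual API; the coordinates below are toy values.

```python
def intersects(a, b):
    """Axis-aligned box test; boxes are (lon_min, lat_min, lon_max, lat_max)."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def clip(a, b):
    """Overlapping region of two intersecting boxes."""
    return (max(a[0], b[0]), max(a[1], b[1]),
            min(a[2], b[2]), min(a[3], b[3]))

s1_footprint = (10.0, 47.5, 13.0, 49.5)  # Sentinel-1 scene extent (toy values)
s2_tile      = (11.2, 48.0, 12.7, 49.0)  # Sentinel-2 tile extent (toy values)

if intersects(s1_footprint, s2_tile):
    overlap = clip(s1_footprint, s2_tile)  # area of Sentinel-1 to cut out
    print(overlap)                         # → (11.2, 48.0, 12.7, 49.0)
```

In the real service, the tile extent is looked up from the Sentinel-2 tile ID and the cropping is applied to the actual raster, but the selection logic reduces to this intersection test.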

The pipeline of the Data Fusion tool starts with the DMG-DF process on the platform; the resulting data are transferred over the Internet so that local end users can perform semantic annotations. The annotations are then ingested into the database on the platform, again over the Internet.

The Data Fusion process consists of three steps:

  • On the platform, launch a Jupyter notebook for Python, and start the Data Model Generation for Data Fusion to process the prepared Sentinel-1 and Sentinel-2 products which cover the same area;
  • On the local user machine, by clicking the Query button of the GUI, users can access the remote database, run image searches and queries, and perform semantic annotations by loading quick-look images of the processed Sentinel-1 and Sentinel-2 products over the Internet. Feature fusion and quick-look pairing are performed at runtime; the fused features and paired quick-looks help obtain better semantic annotations. A separate ingestion button then ingests the annotations into the database on the platform;
  • On the platform, launch a Jupyter notebook for Python, where the annotations and the processed product metadata can be searched and queried.
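The three steps above can be sketched as a small pipeline. All function, field, and label names here are hypothetical stand-ins for the platform's actual components, chosen only to show how the platform-side data model generation, the local annotation step, and the platform-side query fit together.

```python
# Hypothetical sketch of the three-step Data Fusion pipeline.
# None of these names are the platform's real API.

def generate_data_model(s1_product, s2_product):
    """Step 1 (platform): process co-registered products into fused
    feature records (here: 4 toy patches with 12-dimensional features)."""
    return [{"patch_id": i, "features": [0.0] * 12} for i in range(4)]

def annotate(records):
    """Step 2 (local GUI): attach semantic labels to quick-look patches.
    In the real tool the user chooses each label interactively."""
    for r in records:
        r["label"] = "forest"  # stand-in for the user's choice
    return records

def query(db, label):
    """Step 3 (platform): search annotated records in the database."""
    return [r for r in db if r["label"] == label]

db = annotate(generate_data_model("s1_product", "s2_product"))
print(len(query(db, "forest")))  # → 4
```

The key point of the design is the split: steps 1 and 3 run on the platform next to the data and the database, while only step 2, the interactive annotation, happens on the local machine.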

Our preliminary validation is based on visual analysis, comparing the obtained classification maps with already available CORINE land cover maps. In general, our fused results yield more complete classification maps that contain more classes.
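Beyond visual inspection, such a comparison can be quantified by measuring per-pixel agreement with the reference map over the area where the reference is labeled. The tiny label maps below are invented for illustration; the abstract itself reports only a visual comparison.

```python
import numpy as np

# Toy label maps: 0 = unclassified, 1 = forest, 2 = water
ours   = np.array([[1, 1, 2, 2],
                   [1, 1, 2, 2],
                   [1, 2, 2, 2],
                   [1, 1, 1, 2]])
corine = np.array([[1, 1, 2, 2],
                   [1, 1, 2, 2],
                   [1, 1, 2, 2],
                   [0, 0, 1, 2]])

valid = corine > 0                        # compare only where CORINE is labeled
agreement = (ours == corine)[valid].mean()
print(round(agreement, 3))                # → 0.929
```

The same mask also shows the "completeness" aspect mentioned above: the fused map assigns a class everywhere, including the pixels that the reference leaves unclassified.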

How to cite: Yao, W., Dumitru, O., Lorenzo, J., and Datcu, M.: Data Fusion on the CANDELA Cloud Platform, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19933, 2020

