ESSI4.4 | Interdomain digital services for integrated use of environmental data
EDI
Convener: Alessandro Rizzo | Co-conveners: Maria-Luisa Chiusano, Christelle Pierkot, Marie Jossé (ECS), Jérôme Détoc
Orals | Fri, 02 May, 16:15–18:00 (CEST) | Room -2.92
Posters on site | Attendance Fri, 02 May, 10:45–12:30 (CEST) | Display Fri, 02 May, 08:30–12:30 | Hall X4
Posters virtual | Attendance Fri, 02 May, 14:00–15:45 (CEST) | Display Fri, 02 May, 08:30–18:00 | vPoster spot 4

Orals: Fri, 2 May | Room -2.92

The oral presentations are given in a hybrid format supported by a Zoom meeting featuring on-site and virtual presentations. The button to access the Zoom meeting appears just before the time block starts.
Chairpersons: Alessandro Rizzo, Maria-Luisa Chiusano, Marie Jossé
16:15–16:20 | FAIR (meta)data and semantic interoperability
16:20–16:30 | EGU25-4319 | On-site presentation
Paul Weerheim, Peter Thijsse, Dick Schaap, Tjerk Krijger, Alexandra Kokkinaki, and Enrico Boldrini

With Virtual Research Environments (VREs) and digital twins becoming more and more common in support of multidisciplinary Open Science, there is an ever-growing need for clear discovery and accessibility of data from different domains, FAIR for humans and machines. However, FAIR data and services are not yet the standard. Each domain, and within each domain the different research and data infrastructures, has different APIs, metadata models, and semantics in place. To support multidisciplinary case studies, we need to bridge the gaps between these domain-specific (meta)data standards and provide scientists with a harmonised way of finding, accessing and processing this varied (meta)data. This can be done in several ways:

  • By creating a common metadata profile, suitable for machine-to-machine communication, to publish (bottom up!) the information about the data access service and the metadata of the datasets, also taking into account domain-specific semantics (e.g. to describe parameters, units, etc.). Such practices and standards should be tested and afterwards promoted widely, e.g. supported by EOSC. 
  • Since the above profile is not yet implemented, except for parts of it in certain subdomains, syntactic and semantic brokers need to be deployed to harmonise the various models into a central system with its own universal target model.
  • Bringing the various metadata into one place via brokering offers the opportunity to validate and check the metadata for completeness and suitability. Reports can be created to feed FAIRness levels and suggested improvements back to the data publishers.

In this session, several promising activities related to work done in the Blue-Cloud2026 (marine) and FAIR-EASE (multidisciplinary) projects will be presented, all focusing on use cases that require data access in a VRE and are linked to EOSC. The activities include:

  • Set-up of a FAIR-EASE RDF metadata model (DCAT-FE) to describe multi-disciplinary (meta)data uniformly, aiming to bridge the gaps between domain-specific (meta)data standards (a minimal illustrative sketch follows this list). 
  • Development of an (Interdisciplinary) Data Discovery and Access Service ((I)DDAS) based on CNR’s Data Access Broker (DAB), which maps the harvested metadata into an ISO 19139 target model, including domain-specific vocabularies, in order to provide harmonised discovery and access. 
  • Reports and semantic analyser: using the DAB as a broker, the various metadata models and their content can be analysed for completeness and the presence of semantics via a built-in reporting service. Within FAIR-EASE, NOC-BODC has developed a semantic analyser that checks for the existence of vocabulary terms, even if not expressed as such in the metadata.
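
As an illustration of what such a profile-level description can look like, the following minimal sketch builds a DCAT-style record with rdflib. It is not the actual DCAT-FE profile: the dataset and distribution URIs are placeholders, and the parameter link uses an NVS P02 concept purely as an example.

```python
# Minimal sketch (not the DCAT-FE profile itself): describing a dataset and its
# access service with standard DCAT/DCTERMS terms plus a concept URI for the
# measured parameter. All example.org URIs are placeholders.
from rdflib import Graph, URIRef, Literal
from rdflib.namespace import DCAT, DCTERMS, RDF

g = Graph()
ds = URIRef("https://example.org/dataset/argo-temperature")          # placeholder
dist = URIRef("https://example.org/dataset/argo-temperature/erddap")  # placeholder

g.add((ds, RDF.type, DCAT.Dataset))
g.add((ds, DCTERMS.title, Literal("Argo temperature profiles (example)")))
# Link the observed parameter to a community vocabulary concept (NVS P02, example)
g.add((ds, DCAT.theme, URIRef("http://vocab.nerc.ac.uk/collection/P02/current/TEMP/")))
g.add((ds, DCAT.distribution, dist))
g.add((dist, RDF.type, DCAT.Distribution))
g.add((dist, DCAT.accessURL, URIRef("https://example.org/erddap/tabledap/argo")))

print(g.serialize(format="turtle"))
```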

The above work overlaps strongly with themes at the EOSC level, where there are similar challenges in providing access to the large variety of datasets in the data infrastructures. Both the Blue-Cloud2026 and FAIR-EASE teams are involved in EOSC Task Forces, Opportunity Area working groups and the RDA to further promote results and support EOSC in solving this challenge.

How to cite: Weerheim, P., Thijsse, P., Schaap, D., Krijger, T., Kokkinaki, A., and Boldrini, E.: Bridging metadata gaps for FAIR multidisciplinary data access in Virtual Research Environments - Insights from Blue-Cloud2026 and FAIR-EASE, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-4319, https://doi.org/10.5194/egusphere-egu25-4319, 2025.

16:30–16:40 | EGU25-2821 | On-site presentation
Anette Ganske, Markus Stocker, and Angelina Kraft

Earth System Science (ESS) encompasses scientists from different scientific disciplines who use a multitude of heterogeneous terms to describe processes and data. This volume of often ambiguous, duplicate, inconsistent terms presents numerous challenges regarding interoperability and standardisation, e.g. for automated data selection or searching for data in a repository. Terminologies such as ontologies, thesauri and controlled vocabularies can enable scientists and infrastructure providers to realise a machine-processable expression of the information contained in their research data and other scholarly outputs. However, selecting the most appropriate ontology is often difficult and requires support for data producers and curators.

The BITS [1] project is building a Terminology Service (TS) for the Earth System Sciences (ESS TS) as part of the existing Terminology Service [2] (TIB TS) of the TIB – Leibniz Information Centre for Science and Technology [3]. It has implemented the ESS collection [4] within the TIB TS, which already contains 40 terminologies relevant to the ESS and to which further relevant terminologies will be added. All terminologies in the TIB TS are quality controlled. New terminologies for the ESS collection can be suggested at any time via the ESS homepage. New terms for terminologies hosted on GitHub can also be suggested and forwarded to the developers of that terminology.
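
As an illustration of how such a service can be queried programmatically, the sketch below assumes an OLS-style REST search endpoint under https://terminology.tib.eu/ts/api; the exact path, parameters and response fields are assumptions to verify against the TIB TS documentation.

```python
# Illustrative sketch: searching a terminology service for a term.
# Endpoint layout assumed to follow the OLS-style API; check the service docs.
import requests

BASE = "https://terminology.tib.eu/ts/api"   # assumed endpoint

resp = requests.get(f"{BASE}/search",
                    params={"q": "sea surface temperature", "rows": 5},
                    timeout=30)
resp.raise_for_status()
for doc in resp.json().get("response", {}).get("docs", []):
    # Each hit typically carries a label, the source terminology and a resolvable IRI
    print(doc.get("label"), "|", doc.get("ontology_name"), "|", doc.get("iri"))
```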

One possible use case for the ESS TS is to support the annotation of data with terms from terminologies. A major challenge in this case is the breaking of annotations: this can happen if the term of a terminology used for the annotation is deleted, e.g. in a subsequent version of the terminology. Therefore, BITS is conducting a feasibility study: can we assign persistent handles to all classes and individuals of each future version of a terminology in the ESS TS collection, so that the handles redirect to a landing page of the respective term? These handles could then be used in annotations, and the TIB will ensure that they remain persistent.

The integration of the ESS TS into the two different data repositories of the German Climate Computing Centre (DKRZ) and the Senckenberg - Leibniz Institute for Biodiversity and Earth System Research (SGN) is another task of BITS. The experience gained will be used to develop blueprints for connecting other ESS repositories to the TS. We also work closely with NFDI4Earth and the wider ESS community, and with the BASE4NFDI basic service TS4NFDI. Feedback from the wider ESS community on their expectations and needs for such a service is welcome and necessary for the project. Our goal is a terminology service that serves as a valuable resource for researchers, students, professionals and developers in ESS, providing them with accurate and consistent terminologies to enhance their work, improve communication and data sharing, and advance knowledge in their respective fields.

[1] https://projects.tib.eu/bits/home

[2] https://terminology.tib.eu/ts

[3] https://www.tib.eu/en/

[4] https://terminology.nfdi4earth.de

How to cite: Ganske, A., Stocker, M., and Kraft, A.: Establishing a Terminology Service for the Earth System Sciences, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-2821, https://doi.org/10.5194/egusphere-egu25-2821, 2025.

16:40–16:50 | EGU25-14957 | ECS | On-site presentation
Moazzam Shah Bukhari Syed, Paula Kelly, Paul Stacey, and Damon Berry

As the current state of the art, ontologies have supported the need for semantic interoperability and data reusability by enabling consistent descriptions of information in the Earth Sciences domain. However, ontologies alone cannot enable semantic interoperability (or data reusability) in the Earth Sciences, particularly in the complex modelling of dynamic environments. A challenge lies in integrating heterogeneous Earth Observation data, where high semantic standards, scalability, and flexibility are essential to handle the continuously expanding (and ever-changing) data volume and the complex descriptions and relations of multifaceted real-world entities, and to improve their usability across platforms. Ontologies typically represent comparatively static knowledge, which limits their utility in situations like these, where modelling dynamic, ever-evolving processes is required on an ongoing basis. They also, at times, lack the necessary descriptive constructs for documenting complex, real-life applications, affecting data interoperability and reusability. In information systems, particularly those in the Earth Sciences, the documentation aspect of defining the concepts, relationships, and terminologies for real-world entities in a dataset is integral to enabling semantic interoperability and data reusability.

In such continuously evolving environments, archetypes, metadata constructs that can act as high-quality and rigorous documentation templates, show their potential to work together with ontologies to address the challenge of semantic interoperability and to maximise the reuse potential of Earth science data. In the archetype-based two-level information models that have been developed over the past twenty years in e-health, the reference model defines the core structure and relationships of data, while the archetype model provides the domain-specific details separately. A consistent reference model ensures a standardised data format, which reduces inconsistencies in the data. The archetype model allows definitions tailored to a specific domain, where archetypes are designed to represent all the necessary information within that domain. Archetypes ensure that all required data fields are collected consistently, as defined in the archetype model, ensuring uniform data collection across the domain. They further help with metadata management by defining the data elements and attributes needed for each domain. Within their structure, archetypes also allow the binding of ontologies to provide classification (and associated meaning) of the entities.

Over two decades of experience with two-level models for e-health documentation has shown that combining ontologies with an archetype-based two-level information model helps create well-structured and meaningful data and associated information systems. The archetype-based two-level approach ensures consistency, enhanced data integration, reusability, and semantic interoperability. This approach, if explored and applied on a domain-wide level, has the potential to make Earth information systems flexible, scalable, more reliable, and efficient in supporting decision-making surrounding environmental sustainability.
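
The two-level idea can be illustrated with a deliberately toy sketch: a generic reference-model record type plus an archetype that constrains names, units and ontology bindings. All class names, attributes and concept URIs below are invented for illustration.

```python
# Conceptual sketch only: a toy "two-level" model in the spirit of the abstract.
# The reference model fixes a generic, stable structure; an archetype constrains it
# for a specific domain and binds terms to ontology concept URIs.
from dataclasses import dataclass

@dataclass
class Observation:                      # reference model: generic structure
    name: str
    value: float
    unit: str
    concept_uri: str = ""               # slot for an ontology binding

@dataclass
class Archetype:                        # archetype: domain-specific constraints
    allowed_names: dict                 # name -> required ontology concept URI
    allowed_units: dict                 # name -> list of acceptable units

    def validate(self, obs: Observation) -> bool:
        return (obs.name in self.allowed_names
                and obs.concept_uri == self.allowed_names[obs.name]
                and obs.unit in self.allowed_units[obs.name])

soil_archetype = Archetype(
    allowed_names={"soil_temperature": "http://example.org/concepts/soil_temperature"},
    allowed_units={"soil_temperature": ["degC", "K"]},
)
obs = Observation("soil_temperature", 12.4, "degC",
                  "http://example.org/concepts/soil_temperature")
print(soil_archetype.validate(obs))     # True: archetype-conformant observation
```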

How to cite: Syed, M. S. B., Kelly, P., Stacey, P., and Berry, D.: Adapting a Key Semantic Interoperability Innovation from e-Health to Earth Informatics: Are Two-Level Information Models Relevant?, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-14957, https://doi.org/10.5194/egusphere-egu25-14957, 2025.

16:50–17:00 | EGU25-1584 | On-site presentation
Eileen Hertwig, Andrea Lammert, and Andrej Fast

The World Data Center for Climate (WDCC) is a repository hosted by the German Climate Computing Center (DKRZ) in Hamburg, Germany. It provides access to and offers long-term archiving for data relevant to climate and Earth System research in a highly standardized manner, following the FAIR principles. The repository has been an accredited regular member of the World Data System (WDS) since 2003, and WDCC is certified as a Trustworthy Data Repository by CoreTrustSeal (https://www.coretrustseal.org).

WDCC services are aimed both at scientists who produce data (e.g. long-term archiving to fulfill the guidelines of good scientific practice) and at scientists who re-use published data for new research. In the Earth System Sciences, large quantities of data are produced and needed for climate research; this is especially true for climate model output. To enable scientists to re-use data, also across domains, it is essential that data are archived with rich metadata. Before data are published in WDCC, they undergo multiple checks and curation. Recently, WDCC has established its own standard for NetCDF file headers, so that only data fulfilling this standard are accepted for publication.

The WDCC minimal standard is based on the CF metadata conventions, but goes beyond the requirements of most conventional CF checkers to ensure rich metadata in file headers. The requirements for the minimal standard have been published in the WDCC user guide to provide clear instructions for our users. A tool to check NetCDF files against the WDCC minimal standard has recently been released for internal use and is currently being developed for public use as well. This will make it easier for data producers to check their data before submission to WDCC.
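
A checker of this kind can be sketched as follows; the required-attribute list shown here is only an illustration drawn from common CF usage and is not the actual WDCC minimal standard (the WDCC user guide remains the authoritative reference).

```python
# Hedged sketch: checking NetCDF global attributes against a required-attribute
# list. The attribute set below is illustrative only.
import netCDF4

REQUIRED_GLOBALS = ["title", "institution", "source", "history",
                    "references", "Conventions"]          # illustrative only

def check_minimal_header(path: str) -> list:
    """Return the required global attributes missing from the file."""
    with netCDF4.Dataset(path) as nc:
        present = set(nc.ncattrs())
    return [attr for attr in REQUIRED_GLOBALS if attr not in present]

missing = check_minimal_header("example_output.nc")       # placeholder file name
print("Missing global attributes:", missing or "none")
```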

The overall goal of establishing the WDCC minimal standard is to maintain and improve high metadata standards at WDCC, to ease the submission process, and to improve the re-usability of published data.

How to cite: Hertwig, E., Lammert, A., and Fast, A.: World Data Center for Climate – Repository for the Earth System Sciences, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-1584, https://doi.org/10.5194/egusphere-egu25-1584, 2025.

17:00–17:10 | EGU25-5788 | On-site presentation
Marco Kulüke, Karsten Peters-von Gehlen, and Ivonne Anders

Climate science relies heavily on the effective creation, management, sharing, and analysis of massive and diverse datasets. As the digital landscape evolves, there is a growing need to establish a framework that ensures FAIRness in handling climate science digital objects. In particular, the machine-to-machine actionability of digital objects will be a crucial step towards future AI-assisted workflows. Illustrated by a use case, this contribution proposes adopting the Fair Digital Object (FDO) standard in synergy with the emerging InterPlanetary File System (IPFS) protocol to address the challenges associated with the interdisciplinary reuse of climate model simulation outputs and observational data.

FDOs are encapsulations of data and their metadata, made accessible via persistent identifiers, ensuring that data and its context remain a complete unit as the FDO travels through cyberspace and time. They represent a paradigm shift in data management, emphasizing machine-actionability principles of FAIRness and the requirements for enabling cross-disciplinary research. The FDO concept can be applied to various digital objects, including data, documents, and software across different research disciplines and industry areas.

IPFS is a peer-to-peer network protocol that enables decentralized file storage and sharing by assigning each file a unique content identifier. This system facilitates efficient, tamper-resistant storage across a distributed network, inherently supporting immutability and version control. Employing IPFS as the access layer for FDOs adds scalability, security, and redundancy to data management frameworks, while FDOs themselves contribute a semantically structured approach to defining, accessing, and linking digital objects.
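
The interplay of the two concepts can be sketched as follows, assuming a local IPFS (Kubo) node exposing its default RPC API on port 5001; the FDO-like record structure and the handle shown are placeholders, not the FDO specification itself.

```python
# Sketch under assumptions: add a data file to a local IPFS node and wrap the
# returned content identifier (CID) in a minimal FDO-style metadata record.
import json
import requests

IPFS_API = "http://127.0.0.1:5001/api/v0"        # default Kubo RPC endpoint

with open("orcestra_example.nc", "rb") as f:     # placeholder data file
    resp = requests.post(f"{IPFS_API}/add", files={"file": f}, timeout=60)
resp.raise_for_status()
cid = resp.json()["Hash"]                        # content identifier of the file

fdo_record = {                                   # illustrative FDO-style metadata
    "pid": "hdl:21.T12345/example-0001",         # placeholder persistent identifier
    "dataAccess": f"ipfs://{cid}",
    "type": "netCDF",
    "campaign": "ORCESTRA",
}
print(json.dumps(fdo_record, indent=2))
```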

This work presents a prototypical implementation and highlights the immediate benefits of the described concept when applied to manage data derived from the ORCESTRA (Organized Convection and EarthCARE Studies over the Tropical Atlantic) campaign. The campaign, conducted in August and September 2024, involved gathering data from multiple measurement platforms, including satellite observations, airborne instruments, ground-based systems, and climate model data. From a data management perspective, this multi-sensor campaign offers a valuable opportunity to test and refine concepts for handling large, heterogeneous datasets. As part of this work, selected datasets from the campaign were ingested and transferred via IPFS and included in a public catalog adhering to the FDO standard.

In conclusion, IPFS and FDOs establish a decentralized, verifiable, and interoperable ecosystem for digital objects, effectively addressing the requirements for interdisciplinary scientific data sharing and management. Together, these innovative concepts can significantly enhance the reproducibility of research workflows and strengthen the consolidation of scientific results in the societally and economically critical domain of weather and climate research.

How to cite: Kulüke, M., Peters-von Gehlen, K., and Anders, I.: Advancing Climate Data Use by Leveraging the Synergy of the Fair Digital Object Standard and the InterPlanetary File System Protocol, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-5788, https://doi.org/10.5194/egusphere-egu25-5788, 2025.

Interdisciplinary services and tools
17:10–17:20 | EGU25-21496 | On-site presentation
Samuel Keuchkerian, Vincent Breton, David Sarramia, Marc Portier, Antoine Mahul, and Erwan Bodere

The FAIR-EASE Data Lake infrastructure is a pivotal development in Earth sciences, providing a cloud-based approach that significantly enhances the accessibility and utility of complex data for the Earth science research community. At its core, the infrastructure integrates diverse data sources, including Copernicus, enabling comprehensive environmental and geophysical analyses, as well as several existing infrastructures. A key strength of the FAIR-EASE data lake lies in its sophisticated collaborative framework. It incorporates features from established environments such as the European Grid Infrastructure (EGI), Galaxy Europe, D4Science, and the UCA test bed, along with several analytics tools that collectively enhance the infrastructure's operational and security capabilities. This setup ensures high levels of interoperability and facilitates the usage of data and data sources across the various scientific domains of Earth science.

The integration among the five strategic pilot projects (Coastal Water Dynamics, Earth Critical Zones, Ocean Biogeochemical Observation, Marine Omics Observations, and Volcano Space Observatory) demonstrates the infrastructure's unique capability. These projects benefit from shared data access, whatever the data format and access protocol, and from synergistic interactions among their data sources, allowing for innovative correlations, such as combining satellite data with biological and in-situ data. This synergy provides new insights into biodiversity patterns or ecosystem health, showcasing the power of cross-disciplinary data integration.

By providing discovery and access to diverse data sources and offering advanced analytical tools in a secure, collaborative environment, the FAIR-EASE Data Lake is pioneering new methodologies that transcend traditional disciplinary boundaries. It exemplifies the transformative potential of integrated data systems using distributed infrastructures in advancing our understanding of Earth's dynamic systems. This has been achieved by identifying technical solutions for this distributed way of working that are used in other communities (such as Galaxy), and by identifying, integrating and deploying cloud data and data management tools (such as S3 and Apache Iceberg). By developing tools enabling data discovery (namely IDDAS, the Interdisciplinary Data Discovery and Access Service) and libraries (namely UDAL, the Uniform Data Access Layer), in combination with the proposal to adopt the Amazon S3 API for data access in users' practices, the FAIR-EASE data lake has made it possible to include cloud technologies in the pilots' practices and to take advantage of distributed data resources in a very transparent way.

In conclusion, the FAIR-EASE Data Lake infrastructure sets new standards for data-driven research and data analytics in Earth sciences. By merging extensive data access with sophisticated computational resources and a robust collaborative framework, it empowers researchers to expand the frontiers of knowledge about Earth systems and their complex interactions.
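
As a hedged illustration of the S3-based access pattern mentioned above, the following sketch opens data from an S3-compatible store with s3fs and xarray; the endpoint, bucket and object names are placeholders rather than actual FAIR-EASE resources.

```python
# Sketch only: reading a NetCDF object and a Zarr store from an S3-compatible
# data lake. Endpoint, bucket and object names are placeholders.
import s3fs
import xarray as xr

fs = s3fs.S3FileSystem(anon=True,
                       client_kwargs={"endpoint_url": "https://s3.example.org"})

# Option 1: a single NetCDF object
with fs.open("fair-ease-demo/sst/example.nc") as f:
    ds = xr.open_dataset(f)
    print(ds)

# Option 2: a Zarr store laid out for cloud-native access
store = s3fs.S3Map("fair-ease-demo/sst.zarr", s3=fs)
ds_zarr = xr.open_zarr(store)
print(ds_zarr.data_vars)
```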

How to cite: Keuchkerian, S., Breton, V., Sarramia, D., Portier, M., Mahul, A., and Bodere, E.: Enhancing Earth Science Research through the FAIR-EASE Data Lake Infrastructure: Integrating Diverse Data Sources for Advanced Computational Services, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-21496, https://doi.org/10.5194/egusphere-egu25-21496, 2025.

17:20–17:30 | EGU25-1300 | On-site presentation
Adnane Labbaci, Karen Barton, and Jieun Lee

Climate change significantly impacts land degradation, posing substantial threats to ecosystems and human livelihoods. This paper proposes an outline for creating a digital twin that integrates various datasets for monitoring land degradation and supporting stakeholders. The Digital Twin of Land Degradation (DTLD) will integrate satellite and drone imagery, real-time data, IoT, and field-collected data to establish a dynamic, real-time simulation of land degradation in the affected regions. This framework can be used by different end users, such as the scientific community, the private sector, NGOs, the public, decision-makers and international organizations, for analysis, reporting, and support of effective land management practices. By implementing a nested method, the interconnected areas and their associated physical processes contribute to a larger twinning of the entire region. This is achieved by collecting geospatial information from different sources at different scales and modeling these data in real-virtual time. Ultimately, the DTLD approach should prove invaluable in pinpointing vulnerable areas and informing targeted mitigation strategies. This study underscores the critical need for advanced monitoring tools in combating land degradation and highlights the potential of digital twins in environmental research.

How to cite: Labbaci, A., Barton, K., and Lee, J.: Evaluating the Land Degradation trends: A Digital Twin Approach for Enhanced Environmental Monitoring , EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-1300, https://doi.org/10.5194/egusphere-egu25-1300, 2025.

17:30–17:40 | EGU25-4504 | On-site presentation
Reiner Schlitzer and Sebastian Mieruch-Schnülle

webODV is the online version of the popular Ocean Data View (ODV) desktop software for analysis and visualization of marine and other environmental data used by more than 10,000 researchers worldwide. Like ODV, webODV provides an interactive graphical user interface and offers rich feature sets via context specific menus. While desktop ODV requires all datasets to reside on the end user machine, webODV works differently. All datasets as well as a special version of the ODV software reside and run on dedicated webODV servers. Users do not have to install any software or download the sometimes-bulky datasets. Instead, users simply connect to datasets using their web-browser. New browser tabs open for every opened dataset, each tab providing an “ODV-like” interactive user interface. Previous ODV users will find it very easy to work with webODV. Concise getting-started documents help guide new users.

Large volumes of important environmental datasets for all parts of the Earth System are accessible from webODV servers at https://explore.webodv.awi.de/ and https://webodv-egi-ace.cloud.ba.infn.it. This includes global TS- and BGC-Argo profile data, GLODAP carbon system data, SOCAT surface fCO2 data, MEOP marine mammals data, and the World Ocean Atlas 2023. In addition, we also provide global collections of historical meteorological as well as river-runoff data.

In this presentation we focus on new developments that greatly facilitate connecting these diverse, multidisciplinary datasets, linking, for instance, ocean interior data with satellite observations, numerical model output as well as with meteorological and river-runoff data. A number of use cases will be shown, including (1) using trusted reference data for quality control of new BGC-Argo float data, (2) calculating and displaying the difference between ocean section repeats, and (3) the analysis and display of model/data differences.

How to cite: Schlitzer, R. and Mieruch-Schnülle, S.: Linking multi-disciplinary data with webODV, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-4504, https://doi.org/10.5194/egusphere-egu25-4504, 2025.

17:40–17:50 | EGU25-17496 | On-site presentation
Simona Simoncelli, Claudia Fratianni, Paolo Oliveri, Paolo Frizzera, Charles Troupin, and Reiner Schlitzer

The Sea Observations Utility for Reprocessing, Calibration and Evaluation (SOURCE) is a software tool (DOI: 10.5281/zenodo.5008245) designed for web applications that allows users to visualize and analyze in-situ observations and, at the same time, to calibrate and validate ocean models in a selected sea region. Within the framework of the EU FAIR-EASE project (https://fairease.eu/), the SOURCE Python code has been updated in its GitHub repository and adapted into a Jupyter notebook for use in a cloud-based Coastal Water Dynamics Pilot focused on the Northern Adriatic Sea.

The workflow aims to provide users with an interactive experience for the analysis of coastal water dynamics through multiple data sources (in-situ, remotely sensed and model data). The deployment of SOURCE, together with the DIVAnd and webODV tools, on the Galaxy platform represents a significant advancement in enabling interactive workflows for the analysis of the coastal marine environment, where important processes, such as the evolution of plankton blooms or the transport and fate of nutrients, carbon, and contaminants, take place and depend critically on many factors. 

Nowadays, in-situ observations and ocean models can be freely accessed from several marine data services together with the associated metadata. FAIR data access services and input data allow the user to access and filter the data according to the level of quality required for the intended use. However, the in-situ data might still contain some anomalous values (bad data flagged as good) or need a higher level of quality control to be fit for use. SOURCE uses moored temperature and salinity observations, ocean reanalyses distributed by the Copernicus Marine Service, and climatologies computed with the DIVAnd tool (DOI: 10.5281/zenodo.1303229) within the same cloud-based environment.

The SOURCE tool performs a re-processing of observations and model data, computes accurate model skill scores and provides output data for visualization and further analysis. Data reprocessing allows one to characterize temperature and salinity variability at each mooring site and to continuously monitor the ocean state. The SOURCE output includes climatologies, trends, averages at different time scales and model skill scores at the mooring location, which can also be useful for visually inspecting both model and mooring sensor performance.
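
As a simple illustration of the kind of skill metrics involved (not the SOURCE implementation itself), the sketch below computes bias, RMSE and correlation for co-located model and mooring temperature series; the numbers are invented.

```python
# Illustrative sketch: basic model skill scores at a mooring site.
import numpy as np

def skill_scores(obs: np.ndarray, model: np.ndarray) -> dict:
    """Bias, RMSE and correlation between observed and modelled values."""
    mask = ~np.isnan(obs) & ~np.isnan(model)      # keep only co-located valid pairs
    o, m = obs[mask], model[mask]
    return {
        "bias": float(np.mean(m - o)),
        "rmse": float(np.sqrt(np.mean((m - o) ** 2))),
        "corr": float(np.corrcoef(o, m)[0, 1]),
    }

obs = np.array([13.2, 13.5, 14.1, np.nan, 15.0])      # mooring temperature (degC)
mod = np.array([13.0, 13.7, 14.3, 14.9, 15.4])        # model values at same times
print(skill_scores(obs, mod))
```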

The output files in NetCDF format can be visualized through web applications such as webODV and downloaded for further use. 

A demonstration of SOURCE capabilities will be presented.

How to cite: Simoncelli, S., Fratianni, C., Oliveri, P., Frizzera, P., Troupin, C., and Schlitzer, R.: Sea Observations Utility for Reprocessing, Calibration and Evaluation (SOURCE): software update for virtual environment workflows, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-17496, https://doi.org/10.5194/egusphere-egu25-17496, 2025.

17:50–18:00 | EGU25-16118 | On-site presentation
Catherine Schmechtig, Marie Boichu, Thierry Carval, Delphine Dobler, Raphael Grandin, Theo Mathurin, Nicolas Pascal, Virginie Racapé, Catalina Reyes, Raphaelle Sauzede, and Reiner Schlitzer

Under the umbrella of the EOSC ecosystem, the FAIR-EASE project, funded under HORIZON-INFRA-2021-EOSC-01-04, aims to facilitate access to interoperable data and services for Earth and environmental multi-disciplinary use cases, demonstrating the capabilities to support open science (https://fairease.eu/). Based on three of its pilots, namely the Volcano Space Observatory pilot, the Ocean Biogeochemical Observations pilot and the Coastal Dynamics pilot, the FAIR-EASE partners would like to highlight both the synergy and the newly emerging interdisciplinary collaborations and progress that can be achieved in the framework of such a European project promoting the FAIR principles. 

Indeed,

* The Volcano Space Observatory Pilot supports the implementation of innovative web services (notably here the open access VOLCPLUME web platform) displaying a broad range of satellite and ground-based data relevant to the characterization of volcanic gas and particle properties for the near real-time monitoring of volcanic activity and atmospheric hazards.

* The Ocean Biogeochemical (BGC) Observations pilot aims to provide a common QA/QC (Quality Assessment/Quality Control) platform to the whole BGC community to enhance BGC data quality and address fundamental scientific questions. 

* The webODV software, part of the Coastal Water Dynamics pilot tools, allows users to display and superimpose very heterogeneous datasets (e.g. satellite surface data vs. in-situ profile data, climatology vs. in-situ profile data, and observations vs. model simulations in general).

Taking as a starting point the eruption of the Hunga Tonga-Hunga Ha’apai volcano on January 15, 2022, and the availability of various satellite observations of volcanic plumes and ocean surface properties, together with in-situ Argo floats (Argo is an international programme that collects information from inside the ocean using a fleet of robotic instruments that drift with the ocean currents) measuring BGC variables such as chlorophyll-a and suspended particles in the eruption area, the FAIR-EASE partners aim to investigate the potential impacts of such a major stratospheric eruption, a record-breaking eruption in the satellite era, on the marine ecosystem. Volcano and BGC community expertise, as well as tools developed and pooled on the Galaxy Europe platform (Galaxy is an open-source Virtual Research Environment) during the FAIR-EASE project, support scientists in their investigation.

How to cite: Schmechtig, C., Boichu, M., Carval, T., Dobler, D., Grandin, R., Mathurin, T., Pascal, N., Racapé, V., Reyes, C., Sauzede, R., and Schlitzer, R.: Seizing the FAIR-EASE project interdisciplinary opportunity to investigate the Ocean Biogeochemical data in the vicinity of the 2022 record breaking Hunga Tonga Eruption, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-16118, https://doi.org/10.5194/egusphere-egu25-16118, 2025.

Posters on site: Fri, 2 May, 10:45–12:30 | Hall X4

The posters scheduled for on-site presentation are only visible in the poster hall in Vienna. If authors uploaded their presentation files, these files are linked from the abstracts below.
Display time: Fri, 2 May, 08:30–12:30
Chairpersons: Maria-Luisa Chiusano, Alessandro Rizzo, Marie Jossé
FAIR (meta)data and semantic interoperability
X4.145 | EGU25-17005
Alexandra Kokkinaki, Gwenaelle Moncoiffe, Christelle Pierkot, and Guillaume Alviset

Data silos exist as a result of domain-specific semantic and syntactic legacy, but they continue to be created as an inevitable consequence of the exploratory and experimental nature of scientific investigation. New technologies are developed and adopted, new variables are measured, new terms and concepts are defined, and new formats are required, all within typically fairly narrow and highly specialised fields of research. Data sharing technologies must adapt to this evolving environment. Flexibility and connectivity between neighbouring or overlapping fields of research are key. Bridging the semantic gaps and discrepancies to enable seamless discovery, sharing and exploitation of data is the challenge. 

Achieving cross-domain interoperability requires the establishment and harmonisation of both the syntax and semantics of datasets. The Semantic Analyser was developed to address the semantic challenge by scanning metadata records and data files to identify and analyse the semantics used for specific metadata elements, focusing on instruments, parameters, platforms, and keywords.

To determine whether the values for these metadata elements originated from semantic artefacts, we initially explored leveraging existing large semantic repositories, such as Earth Portal and BioPortal. These repositories offered extensive semantic artifacts, potentially reducing the effort required to match terms. However, this approach presented two significant challenges: (1) implementing a federated service for term matching against these repositories proved to be slow and inefficient, and (2) the large number of matched terms generated confusion among users, largely due to the difficulty of selecting appropriate vocabularies and ontologies for specific domains and targeted context. 

To overcome these obstacles, we decided to construct a dedicated knowledge base (KB) containing well-known vocabularies relevant to the datasets in focus. The KB was iteratively refined as new insights were gained, providing a streamlined and domain-specific solution for semantic harmonization and improving the usability and performance of the Semantic Analyser.
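
The knowledge-base idea can be illustrated with a deliberately simplified sketch that matches free-text metadata values against a small local vocabulary; the labels and concept URIs below are placeholders, not the actual KB content.

```python
# Simplified sketch: match raw metadata strings against a small local knowledge
# base instead of federating queries to large semantic repositories.
from difflib import get_close_matches

KNOWLEDGE_BASE = {                      # label -> concept URI (placeholders)
    "sea water temperature": "http://example.org/concepts/sea_water_temperature",
    "practical salinity": "http://example.org/concepts/practical_salinity",
    "ctd": "http://example.org/concepts/ctd",
}

def match_term(raw_value: str, cutoff: float = 0.6):
    """Return (matched label, concept URI) for a raw metadata string, or None."""
    hits = get_close_matches(raw_value.lower(), KNOWLEDGE_BASE.keys(),
                             n=1, cutoff=cutoff)
    return (hits[0], KNOWLEDGE_BASE[hits[0]]) if hits else None

print(match_term("Practical Salinity"))      # exact match after lowercasing
print(match_term("sea water temp"))          # fuzzy match above the cutoff
print(match_term("wind speed"))              # no close match -> None
```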

How to cite: Kokkinaki, A., Moncoiffe, G., Pierkot, C., and Alviset, G.: Building semantic bridges between multi-domain scientific data resources, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-17005, https://doi.org/10.5194/egusphere-egu25-17005, 2025.

X4.146 | EGU25-19337
Ivette Serral, Victoria Lush, Joan Masó, Lucy Bastin, Raul Palma, and Alejandro Villar

The Green Deal Data Space was born in the big data paradigm, where many sensors produce constant streams of Earth observation data (remote sensing and in-situ). The traditional, manual organization of data in layers is no longer efficient, as data are constantly evolving and combined in new ways. There is a need for a new organization and representation of the data. Data Spaces are intended to become the EC's comprehensive solution for integrating data from different sources, with the aim of generating and providing more ready-to-use knowledge in support of the Green Deal priority actions on climate change, circular economy, pollution, biodiversity, and deforestation.

Semantics need to be moved from the layer level to the property level. The solution is not a comprehensive new data model, but a framework composed of a suite of ontologies implemented in line with best practices, reusing existing standard vocabularies, such as Essential Variables (EVs). These are used in Earth observation to define variables that have a high impact on the Earth system and should be a priority for monitoring. EVs assume that there is a limited number of variables that are essential to characterize the state of a system without losing significant information on its past and future trends.

In AD4GD (Horizon Europe nº 101061001), we are describing EVs following the I-ADOPT ontology, starting with the Essential Biodiversity Variables, and extending the model to describe the concept of EV products, where products define spatial and temporal resolution constraints. I-ADOPT (Barbara Magagna et al, RDA) is an ontology framework designed to facilitate interoperability between existing variable description models across research domains. It provides a common set of core components and relations to define machine-interpretable variable descriptions that re-use FAIR (Findable, Accessible, Interoperable, Reusable) vocabulary terms.

I-ADOPT defines five classes (Variable, VariableSet, Property, Entity, and Constraint) and specifies several object properties. Thus, I-ADOPT enables the decomposition of complex observable properties into essential atomic parts represented through concepts in FAIR terminologies, serving as a common layer of abstraction to systematically align and extend concepts from different terminologies as needed.
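
As a hedged sketch, an Essential Variable can be decomposed into I-ADOPT-style components with rdflib as below; the namespace and property IRIs should be verified against the published ontology, and the concept URIs are placeholders.

```python
# Sketch only: I-ADOPT-style decomposition of an Essential Variable.
# The IOP namespace and property names are assumptions to verify.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS

IOP = Namespace("https://w3id.org/iadopt/ont/")          # assumed namespace
EX = Namespace("https://example.org/ev/")                # placeholder concepts

g = Graph()
var = EX["species_abundance"]
g.add((var, RDF.type, IOP.Variable))
g.add((var, RDFS.label, Literal("Species abundance (example EBV)")))
g.add((var, IOP.hasProperty, EX["abundance"]))                 # what is measured
g.add((var, IOP.hasObjectOfInterest, EX["bird_population"]))   # measured on what
g.add((var, IOP.hasConstraint, EX["within_protected_area"]))   # contextual constraint

print(g.serialize(format="turtle"))
```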

EVs expressed under this new ontology schema are being translated as linked data to become available via the OGC RAINBOW Definition Server (https://defs.opengis.net/vocprez/). RAINBOW will ensure that Essential Variables and Products are assigned unique and persistent identifiers facilitating wider adoption and reuse.

How to cite: Serral, I., Lush, V., Masó, J., Bastin, L., Palma, R., and Villar, A.: Essential Variables as a semantic interoperability solution for the Green Deal Data Space, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-19337, https://doi.org/10.5194/egusphere-egu25-19337, 2025.

X4.147 | EGU25-16885
Enrico Boldrini, Roberto Roncella, Fabrizio Papeschi, Paolo Mazzetti, Alexandra Kokkinaki, Gwenaëlle Moncoiffé, Tjerk Krijger, Paul Weerheim, and Dick Schaap

The Discovery and Access Broker (DAB) technology is implemented and deployed in the context of several European Union research projects and international initiatives, such as the Global Earth Observation System of Systems (GEOSS) and the WMO Hydrological Observing System (WHOS), to enable discovery and access across distributed geospatial data provider services, in line with FAIR and open approaches. 

Lately, the DAB has been employed in the context of two EU-funded European Open Science Cloud (EOSC) projects, FAIR-EASE and Blue-Cloud 2026, which are characterized by a strong synergy. In FAIR-EASE, a customized DAB instance enables harmonized semantic searches across 15 data sources: EuroArgo, the Copernicus Marine Environment Monitoring Service (CMEMS), EasyData, ELIXIR-ENA, EurOBIS, the European Environment Agency SDI Catalog, EMODnet, ICOS (Data Portal and SOCAT), the Joint Research Centre Data Catalog, SeaDataNet (open datasets and products), US NODC collections, VITO/Copernicus Global Land Services, and WEkEO, totaling 156,000 datasets available for search. 

These services publish data using different service interfaces and data models. The DAB can seamlessly connect to each of them and map results to the interface and data model required by the users, leveraging a flexible and extensible harmonized internal data model based on ISO 19115. 

The broker discovery services are accessible through multiple interfaces, such as OGC CSW, OpenSearch API, and SPARQL endpoint. In particular, the SPARQL endpoint interface follows a linked data approach that is further leveraged by the FAIR-EASE architecture. To realize the SPARQL endpoint, a mapping from ISO 19115 to the FAIR-EASE DCAT profile was implemented. Concept URIs found in the original metadata (e.g., linking parameters, stations, instruments, keywords from online vocabularies) can be easily represented as properties and relations of the mapped FAIR-EASE dataset, supporting further reasoning and harmonized semantic query. 
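
A harmonized semantic query against such a DCAT-style SPARQL endpoint might look like the following sketch; the endpoint URL and concept URI are placeholders, and the exact properties used by the FAIR-EASE DCAT profile may differ.

```python
# Hedged sketch: query a DCAT-style SPARQL endpoint for datasets linked to a
# parameter concept URI. Endpoint and concept URIs are placeholders.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("https://example.org/iddas/sparql")   # placeholder endpoint
endpoint.setQuery("""
PREFIX dcat: <http://www.w3.org/ns/dcat#>
PREFIX dct:  <http://purl.org/dc/terms/>
SELECT ?dataset ?title WHERE {
  ?dataset a dcat:Dataset ;
           dct:title ?title ;
           dcat:theme <http://vocab.nerc.ac.uk/collection/P02/current/TEMP/> .
} LIMIT 10
""")
endpoint.setReturnFormat(JSON)
for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["dataset"]["value"], "|", row["title"]["value"])
```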

The connection with the Semantic Analyzer component provided by BODC has been explored and will be the subject of further research. This tool can analyze FAIR-EASE datasets by connecting to the DAB CSW interface, extracting a list of terms and scanning known online vocabularies and ontologies to identify possible concept URIs matches. These URIs can finally replace natural text occurring in the original metadata, enhancing overall semantics and quality. 

How to cite: Boldrini, E., Roncella, R., Papeschi, F., Mazzetti, P., Kokkinaki, A., Moncoiffé, G., Krijger, T., Weerheim, P., and Schaap, D.: A flexible open brokering framework supporting distributed semantic discovery , EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-16885, https://doi.org/10.5194/egusphere-egu25-16885, 2025.

X4.148 | EGU25-10434
Sebastian Laboor, Tillmann Lübker, Joshua Hashemi, Antonie Haas, Andreas Walter, Patrick Willner, and Guido Grosse

Open-access thematic data portals play a critical role in advancing an open data culture by addressing knowledge gaps and reducing uncertainties in research. The Arctic Permafrost Geospatial Center (APGC) serves as a pivotal platform for providing high-quality geospatial data to the permafrost research community. By facilitating easy access to diverse data products, APGC supports multi-scale, interdisciplinary analyses that integrate field observations, remote sensing, and modeling efforts.
APGC’s primary objectives are to ensure (i) the delivery of data that is impactful, user-friendly, and scientifically valuable, and (ii) to streamline data discovery, visualization, and metadata exchange through its comprehensive data catalog, accessible at https://apgc.awi.de. This portal is well-suited to support research initiatives and fulfills requirements for publishing and visualizing data and metadata, a growing necessity in securing project funding.
The APGC catalog accommodates datasets of varying formats, spatial scales, temporal extents, and themes. Users can search datasets by geographic location—either through spatial keywords or interactive map selection—as well as by category, product type, project, tags, keywords, licensing, or data format. Download links provide direct access to repositories such as PANGAEA, ensuring seamless data acquisition.
Built on the open-source CKAN framework and utilizing the DCAT metadata standard, the catalog adheres to FAIR (Findable, Accessible, Interoperable, Reusable) data principles. Each dataset is accompanied by detailed metadata, a concise abstract, and a preview option, with metadata available in multiple formats such as RDF/XML, JSON, or Turtle.
Initially established through the ERC PETA-CARB and ESA GlobPermafrost projects, APGC now hosts over 360 curated datasets from various sources. These datasets encompass a wide range of permafrost-related themes, including surface and subsurface characteristics such as soil temperature, carbon content, ground ice, land cover, vegetation, periglacial landforms, and subsidence. Data formats range from vector and raster to time series and are derived from in-situ measurements, Earth observation, and modeling studies. WebGIS tools further enhance user engagement by enabling interactive exploration of most datasets (https://maps.awi.de & https://apgc-map.awi.de/).
APGC encourages data contributions from individual researchers and project consortia. Submitted datasets undergo evaluation based on criteria such as relevance to permafrost research, scientific significance, accessibility, quality, and metadata completeness. To ensure long-term preservation and accessibility, datasets must be archived in repositories like PANGAEA.
By integrating tools for data publication, visualization, and long-term preservation, APGC provides an essential service for research projects, enabling them to meet funding requirements while advancing the understanding of permafrost systems across the Arctic and beyond.
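
Since the catalog is CKAN-based, its content can in principle be queried through the standard CKAN Action API; the sketch below is illustrative, and the query, field names and availability of the endpoint are assumptions to verify against the portal.

```python
# Sketch assuming the standard CKAN Action API exposed by the APGC catalog;
# the search query and printed fields are illustrative.
import requests

resp = requests.get("https://apgc.awi.de/api/3/action/package_search",
                    params={"q": "ground ice", "rows": 5}, timeout=30)
resp.raise_for_status()
for ds in resp.json()["result"]["results"]:
    print(ds["title"])
    for res in ds.get("resources", []):        # linked distributions, e.g. PANGAEA
        print("   ", res.get("format"), res.get("url"))
```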

How to cite: Laboor, S., Lübker, T., Hashemi, J., Haas, A., Walter, A., Willner, P., and Grosse, G.: Fostering Open Science through the Arctic Permafrost Geospatial Center (APGC), EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-10434, https://doi.org/10.5194/egusphere-egu25-10434, 2025.

Interdisciplinary services and tools
X4.149 | EGU25-12566 | ECS
Enrico Baglione, Simona Simoncelli, Paolo Oliveri, Marjahn Finlayson, and Sissy Iona

The Mediterranean Sea is warming at a rate faster than the global ocean average, as recent research highlights. This region is particularly vulnerable to climate change due to its distinctive topography and thermohaline circulation patterns. Observational evidence and model-based analyses have revealed considerable shifts in the properties of Mediterranean water masses.

A crucial metric for tracking this phenomenon is the Ocean Heat Content (OHC). This study addresses the challenge of devising a cloud-based workflow to estimate OHC, enabling analysis of its trends across user-defined sub-regions and depth layers within the Mediterranean basin. Developed within the framework of the EU Blue Cloud 2026 project, this application integrates machine-to-machine access to various blue data infrastructures. 

The OHC indicator will be part of a generalized outcome made up of several Marine Environmental Indicators managed by a Virtual Research Environment (VRE) infrastructure that enables users to assess and monitor marine environmental conditions, offering crucial support for informed decision-making in ocean management. By integrating multiple data sources, the future platform will deliver a centralized data analysis service, enabling online computation of indicators through digital tools.

This case study focuses on the use of World Ocean Database temperature data but it can be easily adapted to other data sources such as SeaDataNet, EuroArgo and the Copernicus Marine Service.

The workflow employs the DIVAnd tool to interpolate historical in situ temperature data onto a regular grid, deriving sliding annual and seasonal decadal temperature fields. These fields will be validated with the World Ocean Atlas 2023 and the results will be compared against ocean reanalysis datasets provided by INGV and the Copernicus Marine Service. The primary goal is to uncover OHC trends: this will help to better understand their impact on the regional climate system. 
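
For orientation, the basic layer-integrated heat content of a single profile can be sketched as below (OHC = rho0 * cp * integral of temperature over the chosen depth layer, in J m-2); the constants and the example profile are indicative only and do not reflect the actual workflow implementation.

```python
# Illustrative sketch of the ocean heat content integral for one profile.
import numpy as np

RHO0 = 1025.0      # seawater reference density (kg m-3), indicative value
CP = 3985.0        # seawater specific heat capacity (J kg-1 K-1), indicative value

def ocean_heat_content(depth_m: np.ndarray, temp_degC: np.ndarray,
                       z_top: float = 0.0, z_bottom: float = 700.0) -> float:
    """Trapezoidal integration of temperature (K) over [z_top, z_bottom]; J m-2."""
    mask = (depth_m >= z_top) & (depth_m <= z_bottom)
    z = depth_m[mask]
    temp_k = temp_degC[mask] + 273.15
    dz = np.diff(z)
    t_mid = 0.5 * (temp_k[:-1] + temp_k[1:])
    return RHO0 * CP * float(np.sum(t_mid * dz))

depth = np.array([0, 50, 100, 200, 400, 700], dtype=float)   # example profile (m)
temp = np.array([18.5, 17.0, 15.2, 14.1, 13.4, 13.0])        # example temps (degC)
print(f"OHC 0-700 m: {ocean_heat_content(depth, temp):.3e} J m-2")
```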

The anticipated findings will shed light on the spatial heterogeneity of warming trends across different sub-regions and depth layers, emphasizing the intricate relationship between climate change and hydrodynamic processes in shaping the thermal structure of the Mediterranean Sea.

Additionally, the workflow ensures that critical ocean variables are regularly updated and validated using the latest community standards. This advancement will enable rapid and reliable updates of OHC as a key indicator, fostering informed decision-making and efficient responses.

How to cite: Baglione, E., Simoncelli, S., Oliveri, P., Finlayson, M., and Iona, S.: Assessing Warming Trends in the Mediterranean Sea: A Workflow-Based Approach, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-12566, https://doi.org/10.5194/egusphere-egu25-12566, 2025.

X4.150 | EGU25-8950 | Highlight
Marie Boichu, Raphael Grandin, Théo Mathurin, Nicolas Pascal, Christine Deroo, Colette Brogniez, Maximilien Patou, Sylvain Neut, Cédric Tétard, Jérôme Riedi, Luc Blarel, and Philippe Goloub

The open access VOLCPLUME web platform (https://volcplume.aeris-data.fr) is part of the Volcano Space Observatory portal under development within the framework of the Horizon Europe EOSC FAIR EASE project. This web interface aims at supporting the near-real-time monitoring of volcanic emissions and the multi-scale analysis of volcanic plumes in the atmosphere from local to global scales (Boichu and Mathurin, 2022).

To reach this goal, VOLCPLUME allows users to jointly analyse a broad set of satellite and ground-based active/passive remote sensing observations of both volcanic gas and particles, including Low Earth and Geostationary Orbit imagery, spaceborne and ground-based lidar, as well as photometric measurements. The platform also gives access to in-situ ground-level data from air quality monitoring networks. This synergy aims at facilitating the assessment of the multiscale impacts of volcanic plumes on atmospheric chemistry, air quality, aviation safety and climate.

The « SO2 Flux Calculator » (https://dataviz.icare.univ-lille.fr/so2-flux-calculator), a companion web application, also allows for automating the computation of daily SO2 gas flux emissions from Sentinel-5P/TROPOMI observations with a robust noise estimation (Grandin et al. 2024). Regarding volcano monitoring and initialisation of atmospheric models, such interactive tools allow for remotely tracking changes in the degassing or eruptive activities of any isolated or non-instrumented volcano.
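
For orientation only, a heavily simplified transect-based flux estimate (not the method of Grandin et al., 2024) multiplies column amounts by pixel width and the wind component normal to the transect, as sketched below with invented numbers.

```python
# Heavily simplified illustration of a transect-based SO2 flux estimate:
# flux = sum over transect pixels of (column amount) x (pixel width) x (normal wind).
import numpy as np

MOLAR_MASS_SO2 = 0.064          # kg mol-1

def so2_flux_kg_per_s(columns_mol_m2: np.ndarray,
                      pixel_width_m: float,
                      wind_normal_m_s: np.ndarray) -> float:
    """Crude flux through a transect perpendicular to plume transport."""
    return float(np.sum(columns_mol_m2 * pixel_width_m * wind_normal_m_s)
                 * MOLAR_MASS_SO2)

columns = np.array([2.0e-3, 5.0e-3, 8.0e-3, 4.0e-3, 1.0e-3])   # mol m-2 (invented)
wind = np.full_like(columns, 7.5)                              # m s-1 (invented)
flux = so2_flux_kg_per_s(columns, pixel_width_m=5500.0, wind_normal_m_s=wind)
print(f"SO2 flux: {flux:.1f} kg/s (~{flux * 86.4:.0f} t/day)")
```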

For illustration, we present different case-studies including the eruptions of La Palma/Cumbre Vieja, Piton de La Fournaise, Soufrière Saint-Vincent and Hunga Tonga. 

 

Boichu, M. and Mathurin, T. (2022). VOLCPLUME, an interactive web portal for the multiscale analysis of volcanic plume physico-chemical properties [Interactive Web based Ressource], AERIS, DOI : 10.25326/362, Platform access: https://volcplume.aeris-data.fr, Homepage: https://www.icare.univ-lille.fr/volcplume/

Grandin, R., Boichu, M., Mathurin, T. and Pascal, N. (2024). Automatic estimation of daily volcanic sulfur dioxide gas flux from TROPOMI satellite observations: application to Etna and Piton de la Fournaise. J. Geophys. Res. https://doi.org/10.1029/2024JB029309

How to cite: Boichu, M., Grandin, R., Mathurin, T., Pascal, N., Deroo, C., Brogniez, C., Patou, M., Neut, S., Tétard, C., Riedi, J., Blarel, L., and Goloub, P.: VOLCPLUME, an interactive open access web platform for the multiscale monitoring of volcanic emissions and their impacts on the atmosphere, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-8950, https://doi.org/10.5194/egusphere-egu25-8950, 2025.

X4.151 | EGU25-11410
Erwan Bodéré

Galaxy is an open-source Virtual Research Environment (VRE) that empowers researchers by enabling reproducible, collaborative, and efficient workflows across disciplines. Designed to align closely with the FAIR principles, Galaxy integrates diverse tools, scalable data storage solutions, and authentication mechanisms, creating an ecosystem that enhances the accessibility and utility of scientific research. Complementing Galaxy, platforms like Zenodo play a role in ensuring the findability and reusability of research outputs, such as datasets and workflows, by providing persistent identifiers (DOIs) and rich metadata.

Galaxy enhances the findability of research artifacts by supporting Persistent Identifiers (PIDs) for tools and workflows, ensuring long-term discoverability. Its integration with RO-Crate facilitates the packaging and description of data and workflows, enabling cataloging and retrieval. In addition, Galaxy fosters reusability by prioritizing reproducibility and modularity. Researchers can share, clone, and adapt workflows enriched with metadata and version histories, enabling replication of results. Searchable repositories such as the Galaxy ToolShed (the app store where all the tools are listed) further ensure discoverability, allowing researchers to access shared tools and workflows across domains. These capabilities help scientists locate resources efficiently. Furthermore, its open-source nature encourages community-driven contributions, ensuring Galaxy's continuous adaptation to evolving research needs.

Accessibility is an important aspect of Galaxy’s architecture. It provides free access to many publicly available instances and supports integration with scalable cloud and object storage systems like S3, ensuring reliable access to datasets regardless of size. Galaxy’s use of Pulsar, a lightweight distributed computing system, extends computational capabilities by enabling workflows to run across diverse remote resources. Security is enhanced through federated identity management, such as EGI Check-in, which facilitates secure access for users worldwide.

Interoperability is at the heart of Galaxy’s design, achieved through compliance with community-driven standards like CWL (Common Workflow Language) and the Galaxy API, ensuring integration with other platforms. Built-in tools like JupyterLab and remote desktop environments enhance the platform’s capacity to interact with external analytical environments and visualization frameworks. Tool containers leveraging Docker and Singularity ensure portability and reproducibility, allowing workflows to operate across heterogeneous infrastructures.
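
As a hedged sketch of such API-driven use, the snippet below drives a Galaxy instance through BioBlend, the Python client for the Galaxy API; the instance URL, API key, input file and workflow name are placeholders.

```python
# Sketch using BioBlend; all identifiers below are placeholders.
from bioblend.galaxy import GalaxyInstance

gi = GalaxyInstance(url="https://usegalaxy.eu", key="YOUR_API_KEY")   # placeholder key

history = gi.histories.create_history(name="fair-ease-demo")
upload = gi.tools.upload_file("example_input.nc", history["id"])      # local file

# Invoke a previously shared workflow on the uploaded dataset
wf = gi.workflows.get_workflows(name="my-analysis-workflow")[0]       # assumed name
inputs = {"0": {"src": "hda", "id": upload["outputs"][0]["id"]}}
invocation = gi.workflows.invoke_workflow(wf["id"], inputs=inputs,
                                          history_id=history["id"])
print("Invocation state:", invocation["state"])
```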

With a vibrant global community driving its development, Galaxy represents a cutting-edge platform that embodies the FAIR principles. By integrating interoperable tools, scalable storage, robust authentication systems, and collaborative frameworks, Galaxy supports researchers in creating, sharing, and reusing scientific knowledge. 

How to cite: Bodéré, E.: GALAXY as a Virtual Research Environment Advancing FAIR Principles, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-11410, https://doi.org/10.5194/egusphere-egu25-11410, 2025.

National Research Data Infrastructure
X4.152 | EGU25-6087
Sebastien Payan, Frédéric Huynh, Anne Puissant, Jean-François Faure, Patrice Henry, Emmanuel Chaljub, Erwann Quimpert, and Yvan Lebras

The consequences of global change on the Earth system are manifold, such as increases in air temperature and sea level, stronger weather events, and impacts on ecosystem biodiversity and natural hazards. But the detection of changes and impacts is still difficult because of the diversity and variability of the Earth's environments (oceans, land surfaces, atmosphere, solid Earth). While there has been a clear increase in the number of environmental observations, whether from in-situ, laboratory or remote sensing measurements, each dataset is both costly to acquire and unique. The number and variety of data acquisition techniques require efficient methods of improving data availability via interoperable portals, which facilitate data sharing according to the FAIR principles for producers and users.

In this context, the DATA-TERRA Research Infrastructure (RI) for Earth data is the entry point for accessing all French environmental observation data. As a digital infrastructure, DATA-TERRA works closely with Earth Observation research infrastructures and space agencies. It is backed by a continuum of distributed and interconnected platforms, proposing services that span the full data cycle from access to value-added processing, thus enabling the exploitation of large volumes of data - notably satellite data - and the generation of information through advanced on-demand and systematic processing services. At national, European and international levels, it is advancing the development of open science, the implementation of FAIR approaches, and contributing to space missions and applications and to the initiative to generate digital twins of the Earth.

The objective of this talk is to present the DATA-TERRA strategy for data and products, from space to Earth, in the domains of marine sciences (ODATIS data hub), continental surface sciences (THEIA), atmospheric sciences (AERIS), solid Earth sciences (FormaTerre), and ecological sciences (PNDB). DATA-TERRA relies on several scientific consortia in order to promote and develop innovative processing methods and products, with a focus on successful interdisciplinary achievements.

How to cite: Payan, S., Huynh, F., Puissant, A., Faure, J.-F., Henry, P., chaljub, E., Quimpert, E., and Lebras, Y.: The DATA-TERRA Research Infrastructure : from data to services for integrated use of environmental data under the scope of success interdisciplinary achievements , EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-6087, https://doi.org/10.5194/egusphere-egu25-6087, 2025.

X4.153 | EGU25-16948
Jean-François Faure, Jean-Philippe Mallet, Elisabeth Pointal, Stéphane Debard, and Michea David

The French National Institutional Facility for Shared Access to Satellite Imagery (DINAMIS) is a cross-functional component of the National Research Infrastructure DATA-TERRA. It was created in 2018 under the impetus of several French funding organizations (CNES, CNRS, IGN, INRAE, IRD, CIRAD). Its mission is to organize and operate the procurement of, and the access to, both very-high-resolution satellite data (SPOT 6-7 and Pléiades, among others) and advanced products derived from Earth observation resources.

To meet these objectives, DINAMIS ensures the supply and dissemination of very-high-resolution datasets meeting the needs (a) of the French scientific communities for land surfaces (THEIA), ocean (ODATIS), solid Earth (FormaTerre), atmosphere (AERIS) and biodiversity (PNDB), and (b) of French governmental organizations, NGOs invested with a public mission, and some private companies involved in R&D projects. In addition, DINAMIS is open to international collaborations, with possible limited access for any scientist belonging to a public research laboratory after membership certification. DINAMIS further applies an open data policy to all Spot 6-7 orthorectified datasets over France (with more than 10 annual coverages of the territory at 1.5 m resolution).

DINAMIS acquires and disseminates to its certified users very-high-resolution imagery from past satellites (Spot 1-5, through the SWH/Spot World Heritage repository) and from active satellites (Spot 6-7, Pléiades, Pléiades Neo); it will further be connected in 2025/26 to upcoming programs (PWH/Pléiades World Heritage; CO3D). All products are pooled in a unified catalog, which grows by roughly one million square kilometers per year through annual Spot 6 national coverages and through user tasking or archive requests of Pléiades and Spot 6-7 imagery in response to specific needs. All products come with a tailor-made license enabling all certified users to access the datasets.

In addition to satellite data access, DINAMIS is interfaced with on-demand processing services operated by the thematic data hubs of DATA-TERRA. For instance, DINAMIS provides seamless access to the DSM-OPT web service of FormaTerre, which computes digital surface models, true ortho-images and ortho-mosaics from Pléiades and Spot 6-7 stereo/tri-stereo images. The service currently hosts ca. 200 users, who launch 15 to 20 online processing jobs per week. All generated surface models are distributed under a CC-BY-NC license through the FormaTerre and THEIA catalogs, in line with open science principles.
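As a hedged illustration only, the sketch below shows the generic submit-and-poll pattern that such an on-demand processing service typically follows; the endpoint URL, payload fields and token are placeholders, not the actual DSM-OPT interface, which certified users reach through its web portal.

```python
# Hypothetical sketch of submitting an on-demand DSM job to a REST service.
# The URL, payload fields and authentication are placeholders; the real
# DSM-OPT service is operated through its own web portal.
import time
import requests

API = "https://processing.example.org/dsm-opt"      # placeholder endpoint
HEADERS = {"Authorization": "Bearer <certified-user-token>"}  # placeholder token

job = requests.post(
    f"{API}/jobs",
    json={
        "scene_ids": ["PLEIADES_TRISTEREO_EXAMPLE_001"],  # placeholder scene id
        "products": ["dsm", "true_ortho"],
        "resolution_m": 0.5,
    },
    headers=HEADERS,
    timeout=30,
)
job.raise_for_status()
job_id = job.json()["id"]

# Poll until the service reports completion, then inspect the result links.
while True:
    status = requests.get(f"{API}/jobs/{job_id}", headers=HEADERS, timeout=30).json()
    if status["state"] in ("succeeded", "failed"):
        break
    time.sleep(30)

print(status["state"], status.get("outputs", []))
```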

How to cite: Faure, J.-F., Mallet, J.-P., Pointal, E., Debard, S., and David, M.: DATA-TERRA -DINAMIS/ForMaTerre: a Very High Resolution Satellite Data Facility for sharing access to optical Earth Observation resources and to on-line processing services for Digital Surface Models creation, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-16948, https://doi.org/10.5194/egusphere-egu25-16948, 2025.

X4.154
|
EGU25-5136
|
ECS
Christin Henzen, Anna Brauer, Jonas Grieb, Ralf Klammer, Markus Konkol, Roland Koppe, Kemeng Liu, Johannes Munke, Tom Niers, Daniel Nüst, Tim Schäfer, and Alexander Wellmann

In the Earth System Sciences, the landscape of resources for research data management (RDM), ranging from specific datasets to RDM services, is highly diverse, reflecting heterogeneous and domain-specific (sub-)community needs. Although marketplaces for services and central access points for specific data types exist, researchers need assistance in discovering resources for RDM tasks without having to navigate multiple, differently designed and non-integrated platforms.

By engaging with the communities, NFDI4Earth aims to address the needs and requirements of end users, pursuing the following objectives:

  • map the existing landscape of sustainable services and reuse them as data sources or user interface components;
  • align strategies across infrastructure providers and NFDI4Earth community members, curating the information provided for data sources together with the providers and sharing compute resources;
  • develop metadata harvesters, or the required adaptations, together with the service providers' developers, and share the source code under an open license; and
  • reuse open standards and specifications for all (service) interfaces and share our own specifications openly.

We apply this strategy in implementing our two central support services.

  • The Knowledge Hub (https://knowledgehub.nfdi4earth.de) is a knowledge graph database and acts as an aggregator: it harvests curated information about relevant resources from various sources and makes it available as semantically enabled open data (see the query sketch after this list). The solution builds on an open-source data management system and a triple store. To foster the community effort, we encourage and, whenever possible, pursue joint implementation between Knowledge Hub and data source developers, as with the Helmholtz Data Hub Earth and Environment. By actively collaborating with the original sources on the curation of the harvested information, e.g., with re3data for repository information, we ensure the distribution of high-quality information via the data sources and the NFDI4Earth Knowledge Hub.
  • The OneStop4All (https://onestop4all.nfdi4earth.de) and the integrated Living Handbook facilitate the discovery of ESS-relevant RDM resources, such as datasets or software, and act as a resource catalog built on the Knowledge Hub data. Through the publication, curation, and integration of related information, such as best practices or showcase articles for published resources, the OneStop4All also acts as a community portal. This approach is implemented by managing Living Handbook articles in an open repository, editing them in an open review process, harvesting them through the Knowledge Hub, and presenting them in a user-friendly way. In addition, the OneStop4All offers several search functionalities that benefit users of different skill levels, from RDM novices to experts. In the next steps, we will integrate existing frontend services, e.g., Earth Data Portal map viewers, so that researchers can use a single entry point for their RDM tasks.
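As a rough illustration of how such a harvested knowledge graph can be consumed, the sketch below sends a generic SPARQL request for DCAT-style dataset records to a triple store. The endpoint URL and the use of DCAT/DCT terms are assumptions for illustration, not the documented Knowledge Hub API.

```python
# Minimal sketch: querying a triple store for DCAT-style dataset records.
# The endpoint URL below is a placeholder, not the documented Knowledge Hub API.
import requests

ENDPOINT = "https://knowledgehub.example.org/sparql"  # hypothetical SPARQL endpoint

QUERY = """
PREFIX dcat: <http://www.w3.org/ns/dcat#>
PREFIX dct:  <http://purl.org/dc/terms/>
SELECT ?dataset ?title WHERE {
  ?dataset a dcat:Dataset ;
           dct:title ?title .
} LIMIT 10
"""

# Standard SPARQL protocol: GET with the query string, asking for JSON results.
response = requests.get(
    ENDPOINT,
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
    timeout=30,
)
response.raise_for_status()

for binding in response.json()["results"]["bindings"]:
    print(binding["dataset"]["value"], "-", binding["title"]["value"])
```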

Our FAIR-by-design services will be developed iteratively by an open and distributed developer team, focusing on access to data publications, with an eye to using the content for advanced data analytics use cases.

How to cite: Henzen, C., Brauer, A., Grieb, J., Klammer, R., Konkol, M., Koppe, R., Liu, K., Munke, J., Niers, T., Nüst, D., Schäfer, T., and Wellmann, A.: Developing Central Support Services for the German National Research Data Infrastructure in Earth System Sciences through a Community-Driven Effort, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-5136, https://doi.org/10.5194/egusphere-egu25-5136, 2025.

Posters virtual: Fri, 2 May, 14:00–15:45 | vPoster spot 4

The posters scheduled for virtual presentation are visible in Gather.Town. Attendees are asked to meet the authors during the scheduled attendance time for live video chats. If authors uploaded their presentation files, these files are also linked from the abstracts below. The button to access Gather.Town appears just before the time block starts. Onsite attendees can also visit the virtual poster sessions at the vPoster spots (equal to PICO spots).
Display time: Fri, 2 May, 08:30–18:00
Chairpersons: Davide Faranda, Valerio Lembo

EGU25-4994 | ECS | Posters virtual | VPS20

bibliometric analysis of natural lakes and paleolakes origin of natural events 

Jamal Abbach, Said El Moussaoui, Hajar El Talibi, and Charaf Eddine Bouiss
Fri, 02 May, 14:00–15:45 (CEST) | vP4.1

This study examines research on lakes and paleolakes of natural origin. The main objective is to perform a bibliometric analysis of research on naturally occurring lake environments worldwide, covering the period from 2014 to 2024. Data extracted from 1687 documents in the Scopus database were analyzed using the VOSviewer software. The results reveal a clear trend towards a focus on the geosciences and the environment. The study particularly highlights the relationships between authors, co-authors, keywords, and publishers of specialized journals in this field, thereby providing information to guide future research and to highlight the role of these globally rare geological environments, based on essentially multidisciplinary geoscience approaches.
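VOSviewer is a graphical tool, but the keyword co-occurrence counting at the heart of such an analysis can be illustrated in a few lines of Python. The sketch below assumes a Scopus CSV export (hypothetical filename) with an "Author Keywords" column; it is an illustration, not the authors' actual workflow.

```python
# Illustrative keyword co-occurrence count on a Scopus CSV export
# (not the authors' actual VOSviewer workflow; filename is assumed).
from collections import Counter
from itertools import combinations
import csv

pairs = Counter()
with open("scopus_export.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        # Scopus exports typically separate author keywords with semicolons.
        keywords = sorted({k.strip().lower()
                           for k in row.get("Author Keywords", "").split(";")
                           if k.strip()})
        pairs.update(combinations(keywords, 2))

# The most frequent pairs correspond to the strongest links in a co-occurrence map.
for (a, b), count in pairs.most_common(10):
    print(f"{a} -- {b}: {count}")
```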

How to cite: Abbach, J., El Moussaoui, S., El Talibi, H., and Bouiss, C. E.: bibliometric analysis of natural lakes and paleolakes origin of natural events, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-4994, https://doi.org/10.5194/egusphere-egu25-4994, 2025.

EGU25-19225 | Posters virtual | VPS20

Customizing Trends.Earth for land degradation assessment in the earth critical zone: a FAIR-EASE approach 

Italia Elisa Mauriello, Giuliano Langella, Fabio Terribile, and Marco Miralto
Fri, 02 May, 14:00–15:45 (CEST) | vP4.2

Land degradation is a critical challenge to sustainable development, impacting ecosystems, economies, and communities globally. As part of the FAIR-EASE Earth Critical Zone (ECZ) pilot, this study develops a tailored Land Degradation Assessment tool based on the Trends.Earth approach. The tool aims to enhance data accessibility, integration, and usability across environmental domains, supporting decision-making and policy frameworks aligned with the United Nations Sustainable Development Goals (SDGs).
Building upon the robust Trends.Earth implementation, we can integrate customized workflows and datasets to reflect regional variability in degradation indicators, including vegetation productivity, soil health, and land cover changes. Our approach prioritizes FAIR (Findable, Accessible, Interoperable, and Reusable) principles to ensure broad usability and collaboration across scientific and policy communities.
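For readers unfamiliar with the Trends.Earth approach, the sub-indicators are typically combined per pixel with the SDG 15.3.1 "one out, all out" rule: a pixel counts as degraded if any sub-indicator degrades. The sketch below illustrates that rule on toy arrays; the array names and the -1/0/1 coding are assumptions for illustration, not the project's actual implementation.

```python
# Illustrative "one out, all out" combination of degradation sub-indicators.
# Coding convention assumed here: -1 = degraded, 0 = stable, 1 = improved.
import numpy as np

productivity = np.array([[ 1,  0], [-1,  0]])  # vegetation productivity trend
land_cover   = np.array([[ 0,  0], [ 0, -1]])  # land cover change
soil_health  = np.array([[ 0, -1], [ 0,  0]])  # soil health proxy (e.g. soil carbon)

stack = np.stack([productivity, land_cover, soil_health])

# Degraded if ANY sub-indicator is degraded; improved only if at least one
# improves and none degrades; otherwise stable.
degraded = (stack == -1).any(axis=0)
improved = (stack == 1).any(axis=0) & ~degraded
combined = np.where(degraded, -1, np.where(improved, 1, 0))

print(combined)
# [[ 1 -1]
#  [-1 -1]]
```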
Preliminary results demonstrate the tool's capacity to enhance the detail of the analysis and to identify degradation hotspots. Furthermore, the integration of open-source geospatial tools and standards supports a scalable framework applicable to diverse environmental contexts.
The tool is designed to be embedded within the LandSupport platform, a geospatial decision support system, further enhancing its accessibility and integration into decision-making processes for land management.
This work contributes to advancing interdomain digital services and illustrates the potential of FAIR principles in addressing complex environmental challenges. We invite feedback from the community to refine, expand and customise the tool's application, fostering collaboration for sustainable land management.

How to cite: Mauriello, I. E., Langella, G., Terribile, F., and Miralto, M.: Customizing Trends.Earth for land degradation assessment in the earth critical zone: a FAIR-EASE approach, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-19225, https://doi.org/10.5194/egusphere-egu25-19225, 2025.

EGU25-15569 | Posters virtual | VPS20

Uniform Data Access Layer: Advancing Data FAIRness in FAIR-EASE 

Jorge Mendes and Marc Portier
Fri, 02 May, 14:00–15:45 (CEST) | vP4.3

The Uniform Data Access Layer (UDAL), a central component within the FAIR-EASE project, is designed to revolutionize how researchers access, integrate, and utilize diverse scientific datasets. FAIR-EASE prioritizes FAIR (Findable, Accessible, Interoperable, Reusable) principles to ensure that data becomes a powerful enabler of scientific discovery and informed decision-making. 

The UDAL concept brings a modular and reusable approach to selecting and using data in data processing workflows. It materializes as a software package that users can embed in their pipelines. UDAL serves as a middleware layer, offering a standardized, user-centric framework for data access. By bridging the gap between complex infrastructures and researchers, UDAL simplifies data retrieval, integration, and usage. This decouples data usage from technical complexities, ensuring that researchers can focus on analysis without needing detailed knowledge of access protocols or data formats. Its adaptability to a wide range of technologies and protocols enables interoperability across disciplines and geographic regions. UDAL's approach has been validated with data providers such as Argo and Blue-Cloud and with various technology stacks and formats, including NetCDF, Beacon, SPARQL endpoints and HTTP REST APIs, demonstrating its capacity to unify diverse datasets behind a single, intuitive system.

A key feature of UDAL is its "named query" mechanism, which allows specific data requests to be standardized and reused. This enhances reproducibility, shields users from the intricacies of data filtering and retrieval, and promotes efficiency. Additionally, UDAL's technology-agnostic approach accommodates both centralized and distributed data architectures, supporting innovation in data management and usage strategies.
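To make the named-query idea concrete, the sketch below shows one way a UDAL-style client could look from inside a pipeline. The class names, query identifier and parameters are hypothetical illustrations of the concept, not the actual FAIR-EASE UDAL API.

```python
# Hypothetical sketch of a UDAL-style client; names and signatures are
# illustrative only and do not reflect the actual FAIR-EASE package.
from dataclasses import dataclass
from typing import Any, Callable, Dict

@dataclass
class NamedQuery:
    """A reusable, named data request that hides protocol and format details."""
    name: str
    handler: Callable[[Dict[str, Any]], Any]  # wraps NetCDF, SPARQL, REST, ...

class Udal:
    """Registry that resolves named queries to whatever backend serves them."""
    def __init__(self) -> None:
        self._queries: Dict[str, NamedQuery] = {}

    def register(self, query: NamedQuery) -> None:
        self._queries[query.name] = query

    def execute(self, name: str, params: Dict[str, Any]) -> Any:
        return self._queries[name].handler(params)

# A pipeline only refers to the query name and its parameters; the backend
# behind the handler can change without touching the analysis code.
udal = Udal()
udal.register(NamedQuery(
    name="urn:example:argo:temperature-profiles",   # hypothetical identifier
    handler=lambda p: f"fetching profiles in {p['region']} for {p['year']}",
))
result = udal.execute(
    "urn:example:argo:temperature-profiles",
    {"region": "North Atlantic", "year": 2023},
)
print(result)
```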

By addressing critical challenges in data management—such as technical barriers and the diversity of data sources—UDAL aligns with the broader goals of FAIR-EASE. It empowers both researchers and data providers, fostering cross-domain collaboration and innovation. Beyond its technical contributions, UDAL embodies a vision of “data as a commodity,” promoting the sustainability and accessibility necessary for open science. While it does not directly address equitable benefit distribution, its transparent usage measurement capabilities lay a foundation for future policy and governance frameworks. 

In conclusion, UDAL represents a transformative advance in data-driven research, harmonizing access across disciplines and platforms while accelerating discovery and fostering innovation. As a cornerstone of FAIR-EASE, UDAL is set to establish new standards for simplicity, usability, and sustainability in scientific data management. 

How to cite: Mendes, J. and Portier, M.: Uniform Data Access Layer: Advancing Data FAIRness in FAIR-EASE, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-15569, https://doi.org/10.5194/egusphere-egu25-15569, 2025.