ESSI3.5 | Advancing Research and Knowledge in Earth and Environmental Science Through Data Integration and Interoperability – Concepts and Applications from Multi-disciplinary Groups
Co-organized by GI2
Convener: Angeliki Adamaki (ECS) | Co-conveners: Fabio Feriozzi (ECS), Federica Tanlongo, Jacco Konijn, Anca Hienola, Marta Gutierrez, Magdalena Brus (ECS)
Orals | Wed, 17 Apr, 14:00–18:00 (CEST) | Room -2.16
Posters on site | Attendance Wed, 17 Apr, 10:45–12:30 (CEST) | Display Wed, 17 Apr, 08:30–12:30 | Hall X4
Posters virtual | Attendance Wed, 17 Apr, 14:00–15:45 (CEST) | Display Wed, 17 Apr, 08:30–18:00 | vHall X4
In the environmental and solid Earth research fields, addressing complex scientific and societal challenges with holistic solutions in the dynamic landscape of data-driven science underscores the critical need for data standardisation, integration and interoperability. Just as humans communicate effectively to share insights, machines must exchange data seamlessly. High-capacity computing services allow the discovery and processing of large amounts of information, boosting the integration of data from different scientific domains and allowing environmental and solid Earth research to thrive on interdisciplinary collaboration and the potential of big data.
As Earth and environmental researchers, our expertise is essential to addressing natural and ecological problems, and it extends to our engagement with operational infrastructures (the Environmental Research Infrastructures (ENVRIs), the European Open Science Cloud (EOSC) and the EGI Federation, among others). Data repositories, e-service providers and other research and e-infrastructures support scientific development with interoperability frameworks and technical solutions that bridge the traditional boundaries between disciplines and enhance machine-to-machine (M2M) interactions, enabling data and service interoperation.
Join this session to explore real-world examples from Earth and environmental scientists (from the atmosphere, marine, ecosystem or solid Earth domains), data product developers, data scientists and engineers. Perhaps you've navigated infrastructures, addressed data analytics, visualisation and access challenges, or embraced the transformative potential of digital twins; perhaps you've gained expertise in data collection, quality control and processing, employed infrastructures to expedite your research, or participated in Virtual Access and/or Transnational Access programmes to expand your horizons. We invite researchers with diverse expertise in data-driven research to showcase impactful scientific use cases and discuss interdisciplinary methodologies or propose best practices with successful interoperability frameworks. Join us as we explore ways to enhance the FAIRness of Earth and environmental data, fostering open science within and beyond our fields.

Orals: Wed, 17 Apr | Room -2.16

Chairpersons: Angeliki Adamaki, Anca Hienola, Magdalena Brus
14:00–14:05
14:05–14:15 | EGU24-3441 | solicited | On-site presentation
Anne Fouilloux

In this presentation, we will share firsthand experiences and insights gained from navigating the EOSC (European Open Science Cloud), offering a glimpse into how EOSC influences our day-to-day work and how it has become an invaluable ally for our team. We belong to the Nordic e-Infrastructure Collaboration on Earth System Modeling Tools (NICEST), a small community composed of researchers, Research Software Engineers (RSEs), and engineers from Norway, Sweden, Finland, Denmark, and Estonia, working in different organisations such as national meteorological services, national compute/storage infrastructure providers and support services, universities and other research institutes, either working directly on climate or supporting related activities. The NICEST community strengthens the Nordic position in climate modelling by addressing e-infrastructure challenges, leveraging Earth System Models (ESMs) to understand climate processes, adapt to global change, and mitigate impacts.

Our presentation extends beyond the technical aspects, offering a narrative of collaborative discovery that illustrates how EOSC has transformed into an indispensable companion, enabling our team to embody the principles of FAIR (Findable, Accessible, Interoperable, and Reusable) and open science. Throughout the session, we will highlight the operational intricacies of frameworks like EOSC, emphasising our nuanced approach to leveraging these frameworks for maximum impact.

This personal narrative is not just about success stories; it explores the challenges we've faced and the lessons we've learned. We place a special emphasis on our evolving understanding of how to exploit specific EOSC services effectively, transforming EOSC from mere infrastructure into a trusted friend in our professional lives.

As we reflect on our collaborative journey, we'll share stories of triumphs, challenges, and the unique bond that has developed between our team and those contributing to EOSC's development. We'll explain how we've moved from being mere users to active contributors, deploying our own services to serve our community. Today, our aim is to actively participate in the construction of EOSC, demonstrating that through collaboration and co-design we can significantly contribute to its ongoing evolution. This collaboration becomes truly effective when each actor recognizes the value of others, enabling us to pool efforts and enhance efficiency together.



How to cite: Fouilloux, A.: How EOSC became our best ally?, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-3441, https://doi.org/10.5194/egusphere-egu24-3441, 2024.

14:15–14:25 | EGU24-8465 | On-site presentation
Ulrich Bundke, Daniele Bailo, Thierry Carval, Luca Cervone, Dario De Nart, Claudio Dema, Tiziana Ferrari, Andreas Petzold, Peter Thijsse, Alex Vermeulen, and Zhiming Zhao

Easy and fast access to reliable, long-term, and high-quality environmental data is fundamental for advancing our scientific understanding of the Earth system, including its complex feedback mechanisms, as well as for developing mitigation and adaptation strategies, for fact-based decision-making, and for the development of environment-friendly innovations. In response to the continuously growing demand for environmental scientific knowledge, the ESFRI-listed environmental research infrastructures (ENVRIs/RIs) in Europe have formed a strong community of principal producers and providers of environmental research data and services from the four subdomains of the Earth system (Atmosphere, Marine, Solid Earth and Biodiversity/Ecosystems) through the cluster projects ENVRI (2011-2014), ENVRIplus (2015-2019), and ENVRI-FAIR (2019-2023). The further integration of ENVRIs across the subdomains is considered critical for leveraging the full potential of the ENVRI cluster for integrated environmental research. This step will be taken by ENVRI-Hub NEXT.

To transform the challenging task of integrated Earth observation into a concept towards a global climate observation system, the World Meteorological Organisation (WMO) has specified a set of Essential Climate Variables (ECV) relevant for the continuous monitoring of the state of the climate. ECV datasets provide the empirical evidence needed to understand and predict the evolution of climate, guide mitigation and adaptation measures, assess risks, enable attribution of climatic events to the underlying causes, and underpin climate services. ENVRIs are critical for monitoring and understanding changes in ECVs, as has been identified by the ESFRI Strategy Working Group on Environment in their recent Landscape Analysis of the Environment Domain.

The recently finished cluster project ENVRI-FAIR has launched an open access hub for interdisciplinary environmental research assets utilising the European Open Science Cloud (EOSC). The ENVRI-Hub is designed as a federated system to harmonise subdomain- or RI-specific access platforms and offers a user-centered platform that simplifies the complexity and diversity of the ENVRI landscape while preserving the structure of the individual RIs needed to fulfil the requirements of their designated communities. Building on the ENVRI-Hub, ENVRI-Hub NEXT aims at creating a robust conceptual and technical framework that will empower the ENVRI Science Cluster to provide interdisciplinary services that enable cross-RI exploitation of data, guided by the science-based framework of ECVs.

This presentation will summarise the status of the ENVRI-Hub and the plans for ENVRI-Hub NEXT.

Acknowledgement:

ENVRI-HUB-NEXT has received funding from the European Union’s Horizon Europe Framework Programme under grant agreement No 101131141.

ENVRI-FAIR has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 824068.

This work is only possible with the collaboration of the ENVRI-HUB-NEXT partners and thanks to the joint efforts of the whole ENVRI-Hub team.

How to cite: Bundke, U., Bailo, D., Carval, T., Cervone, L., De Nart, D., Dema, C., Ferrari, T., Petzold, A., Thijsse, P., Vermeulen, A., and Zhao, Z.: ENVRI-Hub-NEXT, the open-access platform of the environmental sciences community in Europe, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-8465, https://doi.org/10.5194/egusphere-egu24-8465, 2024.

14:25–14:35 | EGU24-5989 | On-site presentation
Daniele Bailo, Rossana Paciello, Helen Glaves, Jean-Baptiste Roquencourt, Jakob Molander, Alessandro Spinuso, Tor Langeland, Jan Michalek, Otto Lange, Agata Sangianantoni, Carine Bruyninx, and Carmela Freda and the EPOS Group

Established as a European Research Infrastructure Consortium (ERIC) in 2018, the European Plate Observing System (EPOS) Research Infrastructure represents a significant advancement in solid Earth sciences. Its aim is to harmonize and integrate data, services, and computational resources across diverse solid Earth science domains. These include Seismology, Near-Fault Observatories, GNSS Data and Products, Volcano Observations, Satellite Data, Geomagnetic Observations, Anthropogenic Hazards, Geological Information and Modeling, Multi-Scale Laboratories, and Tsunami Research, each leveraging EPOS for the integration of domain-specific data and services into a wider European multi-disciplinary context.

The EPOS platform (https://www.epos-eu.org/dataportal) provides access to harmonized and quality-controlled data from thematic solid Earth science services through over 250 interoperable multidisciplinary services. The platform adopts a microservice-based architecture serving RESTful APIs, ensuring seamless interoperability between thematic core services (TCS) and the integrated core services central hub (ICS-C). The ICS-C, as the central system underpinning the EPOS platform, enables interoperability by adopting a multidimensional approach using metadata, semantics, and web services. Released under a GPL license as open-source software (https://epos-eu.github.io/epos-open-source/), EPOS adheres to the FAIR Principles, fostering interdisciplinary collaboration and technological advancement in Earth sciences and beyond.
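
For illustration, a client-side query of such a RESTful search API could look like the sketch below; the base URL, endpoint path, query parameters and response fields are assumptions made for this example, not the documented ICS-C interface (which ships with the open-source release).

```python
import requests

# Hypothetical ICS-C-style search endpoint; path, parameters and the
# response layout below are placeholders for the documented EPOS API.
BASE_URL = "https://www.ics-c.epos-eu.org/api/v1"

resp = requests.get(
    f"{BASE_URL}/resources/search",
    params={"q": "seismic waveform", "format": "json"},
    timeout=30,
)
resp.raise_for_status()

for item in resp.json().get("results", []):
    # Each discovered resource would typically expose a title and a
    # machine-actionable service URL for M2M access.
    print(item.get("title"), "->", item.get("serviceUrl"))
```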

In addition to data access, the EPOS platform also integrates complementary visualization tools and computational services. These Integrated Core Services - Distributed (ICS-D) enhance the user experience by simplifying complex interactions, offering functionalities like visualization, coding, and processing for data analysis, including machine learning applications.

This presentation will explore how the EPOS platform facilitates the entire research data lifecycle, connecting integrated multidisciplinary data provision to remote data analysis environments. By leveraging third-party cloud and supercomputing facilities equipped with specialized APIs (e.g. SWIRRL, https://gitlab.com/KNMI-OSS/swirrl/swirrl-api), we will demonstrate how EPOS seamlessly integrates with external services for reproducible data analysis and visualization, relying on common workflows to gather and pre-process the data. External service examples include Jupyter Notebooks developed by domain-specific communities, with which users can immediately analyze and process the data online. This adaptability streamlines scientific research, promotes data reusability and collaboration within the portal, and showcases the EPOS platform's role in advancing Earth sciences research.

How to cite: Bailo, D., Paciello, R., Glaves, H., Roquencourt, J.-B., Molander, J., Spinuso, A., Langeland, T., Michalek, J., Lange, O., Sangianantoni, A., Bruyninx, C., and Freda, C. and the EPOS Group: The EPOS open source platform for multidisciplinary data integration and data analysis in solid Earth science, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-5989, https://doi.org/10.5194/egusphere-egu24-5989, 2024.

14:35–14:45 | EGU24-3937 | On-site presentation
Angel Lopez Alos, Baudouin Raoult, Ricardo Correa, Andre Obregon, Chris Stewart, James Varndell, Edward Comyn-Platt, Eduardo Damasio Da-Costa, and Marcus Zanacchi

Since its official launch in 2018 in support of the implementation of the Copernicus Climate Change Service (C3S), the Climate Data Store (CDS) software infrastructure has evolved in many ways, driven by an expanding catalogue of resources, a growing user community and the evolution of technologies and standards. In 2020 a twin instance, the Atmosphere Data Store (ADS), was released in support of the Copernicus Atmosphere Monitoring Service (CAMS), and the infrastructure was subsequently renamed the Climate and Atmosphere Data Store (CADS). Combined, the CDS and ADS nowadays serve more than 270k registered users, delivering over 130 TB of data per day on average in the form of more than 700k processed requests.

In 2024, a modernized CADS will take over: a configurable framework built on cloud-oriented, state-of-the-art technologies that provides more scalable, wider, and open access to data and services, fostering engagement with a broader user community and facilitating interaction with different platforms in the future EU Green Deal Data Space.

Despite these changes, the CADS foundational principles of simplicity and consistency remain, along with FAIR. A rigorous content management methodology is at the core of the system, supported by automatic deployment tools and configuration files that range from web portal content to metadata, interactive forms, dynamic constraints, documentation, adaptors, and quality control. This versatile mechanism provides huge flexibility for adaptation to different standards and the FAIR principles.

In addition to improved capabilities for discovery, search and retrieval, the modernized system brings new or re-engineered components aiming to improve the usability of resources, such as compliant OGC APIs, an integrated and interactive Evaluation and Quality Control (EQC) function, open-source expert Python packages (earthkit) for climate and meteorological purposes able to deploy and run anywhere, and serverless Analysis-Ready Cloud-Optimized (ARCO) data and metadata services supporting responsive WMS/WMTS interfaces.
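
For readers new to programmatic access to the data stores, the sketch below uses the public cdsapi Python client; the dataset name and request keywords are illustrative, since each catalogue entry documents its own valid keys, and a free account with an API key configured in ~/.cdsapirc is required.

```python
import cdsapi

# The client reads the API URL and key from ~/.cdsapirc.
client = cdsapi.Client()

# Dataset name and request keys are examples; consult the catalogue
# entry's download form for the valid keywords of each dataset.
client.retrieve(
    "reanalysis-era5-single-levels",
    {
        "product_type": "reanalysis",
        "variable": "2m_temperature",
        "year": "2023",
        "month": "01",
        "day": "01",
        "time": "12:00",
        "format": "netcdf",
    },
    "era5_t2m.nc",  # local target file
)
```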

Modernization also involves the underlying cloud infrastructure which, aligned with ECMWF’s Strategy for a Common Cloud Infrastructure (CCI), brings extended compute and storage resources and, more importantly, closer and more efficient access to ECMWF resources, data, and services.

All new capabilities combined power a new generation of interactive user applications, training material, EQC functions, and efficient access mechanisms to large data volumes, driven among others by ML/AI requirements.

Here we describe the new horizons that the modernized Data Store infrastructure opens to users, introduce the broad spectrum of functionalities, open-source code, and material currently available, and open for debate the expectations and requirements that will foster the future evolution of the different components of the infrastructure.

How to cite: Lopez Alos, A., Raoult, B., Correa, R., Obregon, A., Stewart, C., Varndell, J., Comyn-Platt, E., Damasio Da-Costa, E., and Zanacchi, M.: New horizons for the Data Store Infrastructure at ECMWF, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-3937, https://doi.org/10.5194/egusphere-egu24-3937, 2024.

14:45–14:55 | EGU24-1724 | ECS | On-site presentation
Marco Kulüke, Karsten Peters-von Gehlen, and Ivonne Anders

Climate science relies heavily on the effective creation, management, sharing, and analysis of massive and diverse datasets. As the digital landscape evolves, there is a growing need to establish a framework that ensures FAIRness in handling climate science digital objects. In particular, the machine-to-machine actionability of digital objects will be a crucial step towards future AI-assisted workflows. Motivated by a use case, this contribution proposes the adoption of the FAIR Digital Object (FDO) concept to address the challenges associated with the emerging spread of interdisciplinary reuse scenarios for climate model simulation output.

FDOs are encapsulations of data and their metadata, made accessible via persistent identifiers (PIDs) in such a way that data and their context remain a complete unit as the FDO travels through cyberspace and time. They represent a paradigm shift in data management, emphasizing the machine-actionability principles of FAIRness and the requirements enabling cross-disciplinary research. The FDO concept can be applied to various digital objects, including data, documents and software, within different research disciplines and industry areas.

The aim of this work is to commit to an FDO standard in climate science that enables standardized and therefore automated data analysis workflows and facilitates the extraction and analysis of relevant weather and climate data by all stakeholders involved. The current work expands on the efforts made to enable broad reuse of CMIP6 climate model data and focuses on requirements identified to enable automated processing of climate simulation output and their possible implementation strategies. The exemplary use case of an automated, prototypical climate model data analysis workflow will showcase the obstacles occurring when analyzing currently available climate model data. In particular, the findability of digital objects required for a particular research question in climate science or a related field proves to be challenging. In order to mitigate this issue, we propose certain strategies: (1) Enriching the PID profiles of climate model data in accordance with the FDO concept and taking into account the needs of the climate science community will lead to improved findability of digital objects, especially for machines. (2) Defining a standardized, unique association between climate model variables and their meaningful long names will increase the findability of climate model data, especially for researchers in other disciplines. (3) Furthermore, combining the FDO concept with existing data management solutions, such as intake-esm catalogs, can lead to improved data handling in line with prevailing community practices.
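
To make strategy (3) concrete, the sketch below shows the typical pattern for searching an intake-esm catalog on standardized facets and opening the matches as xarray datasets; the catalogue URL points to the publicly documented Pangeo CMIP6 listing and is given as an example, not as part of the authors' workflow.

```python
import intake  # with the intake-esm plugin installed

# Public Pangeo CMIP6 catalogue (example); cloud-hosted stores also
# require gcsfs and zarr to be installed.
cat = intake.open_esm_datastore(
    "https://storage.googleapis.com/cmip6/pangeo-cmip6.json"
)

# Search on standardized facets rather than free-text long names.
subset = cat.search(
    experiment_id="historical",
    variable_id="tas",   # near-surface air temperature
    table_id="Amon",     # monthly atmospheric fields
)

# A dict of xarray Datasets keyed by facet combinations.
datasets = subset.to_dataset_dict()
print(list(datasets))
```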

Eventually, implementing an FDO standard will benefit the climate science community in several ways: The reusability of the data will facilitate the cost-effective use of existing computationally expensive climate model data. Improved data citation practices will promote data sharing, and ultimately, high transparency will increase the reproducibility of research workflows and consolidate scientific results.

How to cite: Kulüke, M., Peters-von Gehlen, K., and Anders, I.: Improving the Findability of Digital Objects in Climate Science by adopting the FDO concept, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-1724, https://doi.org/10.5194/egusphere-egu24-1724, 2024.

14:55–15:05 | EGU24-14052 | On-site presentation
Lesley Wyborn, Nigel Rees, Jo Croucher, Hannes Hollmann, Rebecca Farrington, Benjamin Evans, Stephan Thiel, Mark Duffett, and Tim Rawling

Modern research data processing pipelines/workflows can have quite complex lineages. Today, it is more than likely that a scientific workflow will rely on multiple Research Infrastructures (RIs), numerous funding agencies and geographically separate organisations to collect, produce, process, analyse and reanalyse primary and derivative datasets. Workflow components can include:

  • Shared instruments to acquire the data;
  • Separate research groups processing/calibrating field data and developing additional derived products;
  • Multiple repository infrastructures to steward, preserve and provide access to the primary data and resultant products sustainably and persistently; and
  • Different types of software and compute infrastructures that enable multiple ways to access and process the data and products, including in-situ access, distributed web services and simple file downloads.

In these complex workflows, individual research products can be generated through multiple levels of processing (L0-L4), as raw instrument data is collected by remote instruments (satellites, drones, airborne instruments, shared laboratory and field infrastructures) and is converted into more useful parameters and formats to meet multiple use cases. Each individual level of processing can be undertaken by different research groups using a variety of funding sources and RIs, whilst derivative products could be stored in different repositories around the globe.

An additional complexity is that the volumes and resolution of modern Earth and environmental datasets are growing exponentially, and many RIs can no longer store and process the volumes of primary data acquired. Specialised hybrid HPC/Cloud infrastructures with co-located datasets that allow for virtual in situ high-volume data access are emerging. But these petascale/exascale infrastructures are not required for all use cases, and traditional small-volume file downloads of evolved data products and images for local processing are all that many users need.

At the core of many of these complex workflows are the primary, often high-resolution observational datasets, which can be in the order of terabytes and petabytes. Hence, for transparent Open Science, and to enable attribution to the funders, collectors and repositories that preserve these valuable data assets, all levels of derivative data products need to be able to trace their provenance back to these source datasets.

Using examples from the recently completed 2030 Geophysics Data Collection project in Australia (co-funded by AuScope, NCI and ARDC), this paper will show how original primary field-acquired datasets and their derivative products can be accessible from multiple distributed RIs and government websites. They are connected using the FAIR principles, ensuring that, at a minimum, lineage and prehistory are recorded in provenance statements and linked using metadata elements such as ‘isDerivedFrom’ and DOIs. Judicious use of identifiers such as ORCIDs, RORs and DOIs links data at each level of processing with the relevant researchers, research infrastructures, funders, software developers, software, etc. Integrating HPC centers that are co-located with large-volume, high-resolution data infrastructures within complex and configurable research workflows provides a key input to supporting next-generation Earth and environmental research and enabling new and exciting scientific discoveries.
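
For illustration, such lineage links can be expressed with DataCite-style related identifiers; in the sketch below every identifier is a placeholder, and only the relation types and identifier schemes come from the DataCite metadata vocabulary.

```python
# DataCite-style metadata fragment linking a level-2 product back to
# its level-0 source dataset; all identifiers are placeholders.
derived_product_metadata = {
    "doi": "10.99999/example-l2-product",
    "creators": [{
        "name": "Example, Researcher",
        "nameIdentifiers": [{
            "nameIdentifier": "https://orcid.org/0000-0000-0000-0000",
            "nameIdentifierScheme": "ORCID",
        }],
    }],
    "contributors": [{
        "name": "Example Research Infrastructure",
        "contributorType": "HostingInstitution",
        "nameIdentifiers": [{
            "nameIdentifier": "https://ror.org/00example0",
            "nameIdentifierScheme": "ROR",
        }],
    }],
    "relatedIdentifiers": [{
        # 'IsDerivedFrom' records the provenance chain back to the
        # primary observational dataset.
        "relatedIdentifier": "10.99999/example-l0-survey",
        "relatedIdentifierType": "DOI",
        "relationType": "IsDerivedFrom",
    }],
}
```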

How to cite: Wyborn, L., Rees, N., Croucher, J., Hollmann, H., Farrington, R., Evans, B., Thiel, S., Duffett, M., and Rawling, T.: Who has got what where? FAIR-ly coordinating multiple levels of geophysical data products over distributed Research Infrastructures (RIs) to meet diverse computational needs and capabilities of users., EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-14052, https://doi.org/10.5194/egusphere-egu24-14052, 2024.

15:05–15:15 | EGU24-16756 | ECS | On-site presentation
Alba Brobia, Joan Maso, Ivette Serral, and Marie-Françoise Voidrot

In-situ Earth observation data play a key role in environmental and climate-related domains. However, in-situ data are often missing or difficult for users to access due to technical barriers, for example unstructured metadata, missing provenance, or a lack of links to standard vocabularies or units-of-measure definitions. This communication presents a well-defined, formalized methodology for identifying and documenting requirements for in-situ data from a user’s point of view, initially tested within the Group on Earth Observations (GEO). This is materialized in a comprehensive Geospatial In-situ Requirements Database and a related tool called G-reqs.

G-reqs facilitates the requirements-gathering process via a web form that acts as the user interface. It encompasses a variety of needs: calibration/validation of remote sensing products, calibration/validation of other in-situ data, input assessment for numerical modelling, creation of an Essential Variable product, etc. Depending on the type of need, there will be requirements for in-situ data that can be formally expressed in the main components of geospatial information: spatial, thematic, and temporal (e.g. area of scope, variable needed, thematic uncertainty, positional accuracy, temporal coverage and frequency, representative radius, coordinate measurements, etc.). G-reqs is the first in-situ data requirements repository at the service of the evolution of the GEO Work Programme, but it is not limited to it. In fact, the entire Earth observation community of users is invited to provide entries to G-reqs. The requirements collected are technology-agnostic and take into account neither the specific characteristics of any dedicated instrument nor the sensors acquiring the data. The web-form-based tool and the list of all validated requirements are FAIRly accessible on the G-reqs web site at https://www.g-reqs.grumets.cat/.
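
As a purely hypothetical sketch of how one validated requirement might be expressed as a structured record, the snippet below is illustrative only; the authoritative structure is defined by the G-reqs web form itself.

```python
# Illustrative shape of a single in-situ data requirement; field names
# are invented for this sketch, not taken from the G-reqs schema.
requirement = {
    "need": "Calibration/Validation of remote sensing products",
    "spatial": {
        "area_of_scope": "Alpine arc",
        "positional_accuracy_m": 10,
        "representative_radius_m": 100,
    },
    "thematic": {
        "variable": "soil moisture",
        "thematic_uncertainty": "0.04 m3/m3",
    },
    "temporal": {
        "coverage": "2015-2025",
        "frequency": "daily",
    },
}
```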

After a process of requirements gathering, the presented approach aims to discover where similar requirements are shared across different scientific domains, fostering in-situ data reusability and guiding the priorities for the creation of new datasets by key in-situ data providers. For example, in-situ networks of observation facilities (ENVRIs, e.g. eLTER and GEO BON, among others) are invited to direct their users to provide requirements to G-reqs and to participate in the analysis of the requirements, detect gaps in current data collection and formulate recommendations for the creation of new products or the refinement of existing ones. The final aim is to improve the interoperability and accessibility of actionable in-situ Earth observation data and services, and their reuse.

This work is inspired by the OSAAP (formerly NOSA) from NOAA, the WMO/OSCAR requirements database and the Copernicus In-Situ Component Information System (CIS2), and was developed under the InCASE project, funded by the European Environment Agency (EEA) in contribution to GEO and EuroGEO.

How to cite: Brobia, A., Maso, J., Serral, I., and Voidrot, M.-F.: G-reqs as a framework for defining precise, technology-agnostic, user-driven geospatial in-situ requirements. Towards a FAIR Global Earth Observation System of Systems without data gaps., EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-16756, https://doi.org/10.5194/egusphere-egu24-16756, 2024.

15:15–15:25 | EGU24-15585 | On-site presentation
Claudio De Luca, Massimo Orlandi, Manuela Bonano, Francesco Casu, Maddalena Iesuè, Michele Manunta, Giovanni Onorato, Mario Fernando Monterroso Tobar, Giancarlo Rivolta, and Riccardo Lanari

The current remote sensing scenario is characterized by extensive exploitation of spaceborne Synthetic Aperture Radar (SAR) data to investigate Earth surface dynamics. This demand is rather well satisfied by the huge archives collected over the last ten years by the Copernicus Sentinel-1 (S1) SAR mission, which is distinguished by a “free and open” access data policy and a nearly global coverage acquisition strategy. In this regard, the most used spaceborne geodetic technique for the investigation of ground deformation is Differential Synthetic Aperture Radar Interferometry (DInSAR), which has largely demonstrated its effectiveness in measuring surface displacements in different scenarios. In particular, the advanced DInSAR method referred to as the Parallel Small BAseline Subset (P-SBAS) approach has emerged as particularly effective for examining the temporal evolution of detected surface displacements in both natural and anthropogenic hazard contexts, such as volcanoes, earthquakes, landslides and human-induced deformation due to mining activities, fluid exploitation, and the construction of large infrastructure.

In this context, the availability to the scientific community of algorithms and tools suitable for effectively exploiting such huge SAR data archives to generate value-added products is becoming crucial. To this aim, the P-SBAS algorithm has been released as an on-demand web-based tool by integrating it within the EarthConsole® platform, and it currently contributes to the on-demand remote sensing component of the EPOSAR service. In more detail, EarthConsole® is a cloud-based platform supporting the scientific community with the development, testing, and hosting of their processing applications to enable Earth Observation (EO) data exploitation and processing services. EPOSAR, instead, is a service available in the framework of the European Plate Observing System (EPOS) Satellite community, which provides systematic ground displacement products relevant to various areas on Earth.

In this work we present the deployment of the P-SBAS tool within the EarthConsole® platform, in order to extend the EPOSAR service portfolio to the on-demand generation of DInSAR displacement maps and time series exploiting C-band satellite data. In particular, the developed service builds on the already available capability to carry out multi-temporal DInSAR processing of ENVISAT data and allows scientific users to also process Sentinel-1 SAR images in a fully autonomous manner, through a user-friendly web graphical interface that permits them to follow the progress of the processing tasks and avoids the need to download SAR data to their own processing and archiving facilities. The availability to the EPOS community of such an on-demand P-SBAS-based DInSAR processing service, which allows scientific users to retrieve, in an unsupervised way and in a very short time, ground displacement maps and time series relevant to large areas, may open new intriguing and unexpected perspectives on the comprehension of Earth surface deformation dynamics.

How to cite: De Luca, C., Orlandi, M., Bonano, M., Casu, F., Iesuè, M., Manunta, M., Onorato, G., Monterroso Tobar, M. F., Rivolta, G., and Lanari, R.: On the exploitation of the Sentinel-1 P-SBAS service within the EarthConsole® platform for unsupervised on-demand DInSAR processing, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-15585, https://doi.org/10.5194/egusphere-egu24-15585, 2024.

15:25–15:35 | EGU24-12925 | ECS | On-site presentation
Abdullah Al Faisal, Maxwell Kaye, and Eric Galbraith

Human activities have extensively modified over 70% of Earth’s land surface and two-thirds of marine environments through practices such as agriculture, industrialization, and urbanization. These activities have resulted in a wide range of environmental problems, including biodiversity loss, water pollution, soil erosion, and climate change. However, human data are often available only in tabular form, are difficult to integrate with natural Earth variables, and can pose significant challenges when trying to understand the complex interactions between human activities and natural Earth systems. Scientific datasets, on the other hand, are spread across websites, come in different formats, may require preprocessing, and use different map projections, spatial resolutions, and non-standard units, making them difficult for both beginner and experienced researchers to access and use. This discrepancy hinders our understanding of complex interactions between human activities and the environment.

To bridge this gap, we have created the Surface Earth System Analysis and Modelling Environment (SESAME) software and dataset package, which aims to solve the problem of fragmented and difficult-to-use human-Earth data. It can handle various data formats and generate a standardized gridded dataset with minimal user input. SESAME is a software infrastructure that automatically transforms five input data types (raster, point, line, polygon, and tabular) into standardized spatial grids and stores them in a netCDF file. The ability of a netCDF file to store multidimensional time-series data makes it an ideal platform for storing complex global datasets. SESAME utilizes the dasymmetric mapping technique to transform jurisdiction-level tabular data into a gridded layer proportional to the corresponding surrogate variable while considering changes in country boundaries over time. It maintains consistency between input and output data by calculating the global sum and mean.
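
As a minimal sketch of the kind of standardized output described here (not SESAME's actual API), the following Python code builds one gridded variable on a regular latitude/longitude grid with a time axis and writes it to netCDF with xarray.

```python
import numpy as np
import xarray as xr

# One-degree global grid with a two-step annual time axis.
lat = np.arange(-89.5, 90.0, 1.0)
lon = np.arange(-179.5, 180.0, 1.0)
time = np.array(["2000-01-01", "2001-01-01"], dtype="datetime64[ns]")

# Placeholder values; a real pipeline would fill this grid from
# jurisdiction-level tabular data spread over a surrogate variable.
population = xr.DataArray(
    np.zeros((time.size, lat.size, lon.size)),
    coords={"time": time, "lat": lat, "lon": lon},
    dims=("time", "lat", "lon"),
    attrs={"units": "persons", "long_name": "gridded population"},
)

ds = xr.Dataset({"population": population})
ds.to_netcdf("sesame_like_grid.nc")  # multidimensional time series in one file
```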

By converting human tabular data into a gridded format, we can facilitate comprehensive and spatially explicit analyses, advancing our understanding of human-Earth systems and their complex interactions. These gridded datasets are intended to be used as inputs to a range of different Earth system models, potentially improving the simulation and evaluation of scenarios and leading to more informed and strategic future policy decisions.

How to cite: Faisal, A. A., Kaye, M., and Galbraith, E.: SESAME: Software tools for integrating Human - Earth System data, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-12925, https://doi.org/10.5194/egusphere-egu24-12925, 2024.

15:35–15:45 | EGU24-12844 | Virtual presentation
Margarita Segou, Kiratzi Anastasia, Carlo Cauzzi, Susana Custódio, Rémy Bossu, Florian Haslinger, Laurentiu Danciu, Fatemeh Jalayer, Roberto Basili, Irene Molinari, and Adrien Oth

We present the dynamic landscape of EPOS Seismology, a Thematic Core Service consortium at the foundation of the European Plate Observing System (EPOS) infrastructure. Cultivated over the past decade through partnerships with prominent pan-European seismological entities, namely ORFEUS (Observatories and Research Facilities for European Seismology), EMSC (Euro-Mediterranean Seismological Center), and EFEHR (European Facilities for Earthquake Hazard and Risk), EPOS Seismology stands out as a collaborative governance framework. Facilitating the harmonized interaction between seismological community services, EPOS, and its associated bodies, it endeavors to widen the collaboration to include data management, product provision, and the evolution of new seismological services.

Within the EPOS Delivery Framework, EPOS Seismology pioneers a diverse array of services, fostering open access to a wealth of seismological data and products while unwaveringly adhering to the FAIR principles and promoting open data and science. These services encompass the archival and dissemination of seismic waveforms from 24,000 seismic stations, access to pertinent station and data-quality information, parametric earthquake data spanning recent and historical events, and advanced event-specific products such as moment tensors and source models, together with reference seismic hazard and risk for the Euro-Mediterranean region.

The seismological services are seamlessly integrated into the interoperable centralized EPOS data infrastructure and are openly accessible through established domain-specific platforms and websites. Collaboratively orchestrated by EPOS Seismology and its participating organizations, this integration provides a cohesive framework for the ongoing and future development of these services within the extensive EPOS network. The products and services support the transformative role of seismological research infrastructures, showcasing their pivotal contributions to the evolving narrative of solid Earth science within the broader context of EPOS.

How to cite: Segou, M., Anastasia, K., Cauzzi, C., Custódio, S., Bossu, R., Haslinger, F., Danciu, L., Jalayer, F., Basili, R., Molinari, I., and Oth, A.: EPOS Seismology: Connecting Communities, Advancing Research, and Paving the Way Forward, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-12844, https://doi.org/10.5194/egusphere-egu24-12844, 2024.

Coffee break
Chairpersons: Federica Tanlongo, Fabio Feriozzi, Jacco Konijn
Multidisciplinary Use Cases - Applying Practical Solutions
16:15–16:25 | EGU24-1555 | ECS | On-site presentation
Christoph Wohner, Alessandro Oggioni, Paolo Tagliolato, Franziska Taubert, Thomas Banitz, Sarah Venier, Philip Trembath, and Johannes Peterseil

The integrated European Long-Term Ecosystem, critical zone and socio-ecological Research (eLTER) infrastructure is an emerging pan-European, in-situ Research Infrastructure (RI). Once fully established, it will serve multiple scientific communities with high-level central facilities and distributed, well-instrumented eLTER sites. In the Horizon Europe project Biodiversity Digital Twin (BioDT), eLTER already plays the role of a provider of European datasets, in particular for the Grassland Dynamics prototype digital twin. Here, GRASSMIND, an individual- and process-based grassland model designed for simulating the structure and dynamics of species-rich herbaceous communities, including these communities’ responses to climate and management, is to be upscaled to model different local grassland sites across Europe. As the eLTER in-situ site network also comprises such grassland sites, the site registry DEIMS-SDR (deims.org) was used to identify relevant sites and contact the respective site managers and researchers to mobilise data. This selection process was aided by the machine-actionable data endpoints of eLTER, also accessible using the Python and R packages deimsPy and ReLTER, enabling script-based extraction and analysis. Collected and mobilised data are to be published on the persistent data storage B2SHARE and made centrally accessible through the eLTER central data node. Metadata about the resources are also available in RDF format, making them interlinked and accessible via a SPARQL endpoint.
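
A minimal sketch of such script-based access with deimsPy follows, assuming the package's documented entry points for listing sites and retrieving a site record; the DEIMS.iD shown is a placeholder, not a real grassland site.

```python
import deims  # deimsPy (pip install deims)

# Enumerate sites registered in DEIMS-SDR, then pull one full record;
# the record structure follows the DEIMS-SDR API.
sites = deims.getListOfSites()
print(len(sites), "sites registered")

# Placeholder DEIMS.iD; real identifiers come from deims.org.
record = deims.getSiteById("deims.org/00000000-0000-0000-0000-000000000000")
print(record["title"])
```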

The data provided will enable stronger validation and improvement of the grassland simulations, and thus better scientific insights and grassland management recommendations.

How to cite: Wohner, C., Oggioni, A., Tagliolato, P., Taubert, F., Banitz, T., Venier, S., Trembath, P., and Peterseil, J.: eLTER and its role of providing in-situ data to large scale research projects for modelling biodiversity dynamics, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-1555, https://doi.org/10.5194/egusphere-egu24-1555, 2024.

16:25–16:35 | EGU24-17596 | On-site presentation
Claudio D'Onofrio, Ute Karstens, Alex Vermeulen, Oleg Mirzov, and Zois Zogopoulos

The ICOS Carbon Portal is the main data repository for the Integrated Carbon Observation System Research Infrastructure (ICOS RI), covering the Atmosphere, Ocean, and Ecosystem domains. Data from ICOS are available and accessible to humans and machines, with a rich set of metadata, under a CC BY 4.0 licence. The core services for the data portal (https://data.icos-cp.eu/portal/) are open-source software and are available on GitHub (https://github.com/ICOS-Carbon-Portal). The main goal of the development was to make the European greenhouse gas measurements accessible as FAIRly as possible. This led to a mature and stable data portal which was subsequently adapted for use by another research infrastructure, namely SITES, a national Swedish infrastructure for ecosystem science, and by the European Horizon 2020 project PAUL, pilot applications in urban landscapes (ICOS Cities). Although all three data portals use the same software core and are hosted at the ICOS Carbon Portal, they are independent of each other and based on slightly different ontologies. Hence, we have a unique opportunity to explore the challenges and opportunities of accessing and combining data from three or more different data sources and to compare FAIR aspects of the datasets. How do we deal with attribution of the used data through correct citations? Do we have access to the licence for each data set, are the licences different, and what are the implications? How do we combine the data for further analysis while keeping track of provenance and origin?
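
As an illustration of machine access, the metadata store behind the portal can be queried programmatically; the sketch below assumes the public ICOS SPARQL endpoint and an ontology term (cpmeta:hasName) from the ICOS metadata vocabulary, so both should be verified against the current documentation before being relied upon.

```python
import requests

# Public SPARQL endpoint of the ICOS metadata store (assumed).
ENDPOINT = "https://meta.icos-cp.eu/sparql"

# List a handful of data objects and their file names.
QUERY = """
PREFIX cpmeta: <http://meta.icos-cp.eu/ontologies/cpmeta/>
SELECT ?dobj ?fileName WHERE {
  ?dobj cpmeta:hasName ?fileName .
} LIMIT 5
"""

resp = requests.post(
    ENDPOINT,
    data={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
    timeout=60,
)
resp.raise_for_status()

for row in resp.json()["results"]["bindings"]:
    print(row["dobj"]["value"], "->", row["fileName"]["value"])
```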

Further, we will step back from the implementation of a service on specific data sets (a hands-on, bottom-up approach) and look at scalability to include other (environmental/ENVRI) data portals, thinking more along the lines of a top-down approach such as the European Open Science Cloud (EOSC). Can we offer a generalised service level for automated data processing from machine to machine? What do we need to process cross-domain data sets?

How to cite: D'Onofrio, C., Karstens, U., Vermeulen, A., Mirzov, O., and Zogopoulos, Z.: Challenges and opportunities from an in-house cross collaboration between three research infrastructure data repositories, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-17596, https://doi.org/10.5194/egusphere-egu24-17596, 2024.

16:35–16:45 | EGU24-15957 | ECS | On-site presentation
Ershad Gholamrezaie, Philip Buckland, Roger Mähler, Johan von Boer, Rebecka Weegar, Mattias Sjölander, and Carl-Erik Engqvist

The newly formed Swedish National Infrastructure for Digital Archaeology (SweDigArch) and the Strategic Environmental Archaeology Database (SEAD) are positioned at the intersection of environmental research, data science and the humanities. They represent a considerable upscaling of archaeological and Quaternary geological databases, combining meticulous data management, collaborative stewardship, advanced online interfaces, and visualization.

SweDigArch seeks to enhance the open accessibility of archaeological data from Swedish institutions, unlocking the knowledge embedded in cultural heritage and environmental repositories to facilitate interdisciplinary and international research. At its core, SweDigArch aims to enable data-driven analyses across diverse archaeological, palaeoecological, and related materials, including links to biodiversity and other external data sources. This initiative advances research on the intricate relationships between human societies and their environments over long timescales, empowering scholars to formulate inquiries that contribute not only to historical comprehension but also hold contemporary relevance and prospective implications.

In the pursuit of data-driven analyses, SweDigArch focuses on facilitating research which examines past human-environment interactions. Through the analysis of archaeological and recent geological datasets, the project endeavors to stimulate research providing insights into the functioning of socio-ecological systems, identifying historical vulnerabilities and resilience-building factors. This knowledge, in turn, will inform contemporary design, planning, and policy frameworks across various institutional and infrastructural domains, from environmental and cultural impact assessments to assessing risks from future climate change.

SweDigArch aims to optimize the utility of Swedish archaeological and palaeoecological data through linked data, open formats, shared vocabularies, and the semantic web. This approach enriches national and international research initiatives and facilitates cross-cultural comparative research, contributing to a broader understanding of global human history.

Integral to the collaborative framework is SEAD, an Open Access repository for proxy environmental data, including various archaeological and palaeoecological datasets. Incorporating datasets such as BugsCEP fossil insect data and Swedish data on plant macrofossils, pollen, dendrochronology, geochemistry, and ceramic thin sections, SEAD's evolving functionality now extends to accommodate osteological and isotope analyses, underscoring its role as a dynamic platform for data visualization and semantic networking.

Together, SweDigArch and SEAD aim to bridge the divide between academic and contract archaeology, offering a pivotal resource for cultural and environmental historical research, urban planning, and sustainability analyses. These initiatives aspire to become the standard primary data infrastructure for all users of Swedish archaeological information, transcending scholarly circles to encompass fields such as cultural heritage preservation and urban planning. This collaborative endeavor invites active engagement from a diverse user base, fostering a scholarly ethos of openness, data-driven inquiry, and interdisciplinary collaboration to deepen our comprehension of the past and contribute to the sustainable shaping of the future.

This presentation will describe the infrastructure and provide examples of its use in the analysis and visualization of interdisciplinary data, including fossil insects, past climate change and human impact on biodiversity and the environment.

How to cite: Gholamrezaie, E., Buckland, P., Mähler, R., von Boer, J., Weegar, R., Sjölander, M., and Engqvist, C.-E.: A Swedish National Infrastructure for Interdisciplinary Environmental Research Integrating Archaeological and Quaternary Geological Data, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-15957, https://doi.org/10.5194/egusphere-egu24-15957, 2024.

16:45–16:55 | EGU24-14521 | On-site presentation
Jean-Philippe Malet, Maxime Lamare, Lucia Guardamino, Jonas Viehweger, Stefania Camici, Luca Brocca, Silvia Barbetta, Bianca Bonaccorsi, Sara Modanesi, Angelica Tarpanelli, Matteo Dall’Amico, Federico Di Paolo, Nicolo Franceschetti, Clément Michoud, Thierry Oppikoffer, David Michéa, Floriane Provost, Aline Déprez, Michaelis Foumelis, and Philippe Bally

The Alps are the most densely populated mountain range in Europe, and water resources play a central role in the socio-economic development of the area (agriculture, tourism, hydropower production...). Furthermore, the Alps are particularly sensitive to the impacts of climate change, and thus hydro-meteorological hazards such as landslides, floods, droughts and glacier-related processes, which are expected to increase in the near future, constitute a major threat to human activity. Indeed, over the last century, temperatures have risen twice as fast as the northern-hemisphere average, whereas precipitation has increased non-linearly and has become more discontinuous.

Because of the increasing pressure on human settlements and infrastructure, there is a strong priority for policy-makers to implement climate change adaptation strategies from the local to the regional scale. To support and improve the decision-making process, numerical decision support systems may provide valuable information derived from multi-parametric (in-situ sensors, satellite data) observations and models, linked to computing environments, in order to better manage increasing threats and weaknesses.

The main objective of the Digital Twin of the Alps (DTA) platform is to provide a roadmap for the implementation of future Digital Twin Components, with a focus on the Alpine chain. In this context, a demonstrator has been developed that enables a holistic representation of some of the major physical processes specific to the Alpine context, powered by a unique combination of Earth Observation data analytics, machine learning algorithms, and state-of-the-art hydrology and geohazard process-based models. Advanced visualization tools have been specifically implemented to favour easy exploration of the products by several categories of stakeholders.

The resulting Digital Twin Earth precursor will provide an advanced decision support system for actors involved in the observation and mitigation of natural hazards and environmental risks, including their impacts in the Alps, as well as the management of water resources. For instance, through the demonstrator users can investigate the availability of water resources in terms of snow, soil moisture, river discharge and precipitation. Furthermore, it is possible to stress the system with scenario-based options to see the impacts on the various hydrological drivers in terms of drought and flood probability. Finally, the user can assess flood hazard, forecast (with a daily lead time) the occurrence of shallow landslides (slope failure probability and material propagation) and predict the activity (e.g. velocity) of large, deep-seated and continuously active landslides under extreme rain events through the use of a combination of physics- and AI-based simulation tools. Use cases in northern Italy, southern Switzerland and southern France are provided.

Finally, the user can visualise maps and time series of terrain motion products over several Alpine regions generated with advanced Earth Observation processing chains and services (GDM-OPT, Snapping) available on the Geohazards Exploitation Platform and the eo4alps-landslides App, providing a consistent description of Earth surface deformation (unstable slopes, large deep-seated landslides, ice glacier) for the period 2016-2022. The data, services and technologies used and developed for the platform will be presented.

How to cite: Malet, J.-P., Lamare, M., Guardamino, L., Viehweger, J., Camici, S., Brocca, L., Barbetta, S., Bonaccorsi, B., Modanesi, S., Tarpanelli, A., Dall’Amico, M., Di Paolo, F., Franceschetti, N., Michoud, C., Oppikoffer, T., Michéa, D., Provost, F., Déprez, A., Foumelis, M., and Bally, P.: Towards a Digital Twin for the Alps to simulate water-related processes and geohazards for climate change adaptation strategies., EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-14521, https://doi.org/10.5194/egusphere-egu24-14521, 2024.

16:55–17:05 | EGU24-16311 | On-site presentation
Gwen Franck and Donatello Elia

The Horizon Europe interTwin project is developing a highly generic yet powerful Digital Twin Engine (DTE) to support interdisciplinary Digital Twins (DTs). Comprising thirty-one high-profile scientific partner institutions, the project brings together infrastructure providers, technology providers, and DT use cases from Climate Research and Environmental Monitoring, High Energy and AstroParticle Physics, and Radio Astronomy. This group of experts enables the co-design of the DTE Blueprint Architecture and the prototype platform, benefiting end users such as scientists and policymakers, but also DT developers. It achieves this by significantly simplifying the process of creating and managing complex Digital Twin workflows.

In the context of the project, among others, Digital Twin (DT) applications for extreme events (such as tropical cyclones and wildfires) in climate projections are being implemented. Understanding how climate change affects extreme events is crucial, since such events can have a significant impact on ecosystems and cause economic losses and casualties. In particular, the DT applications are based on Machine Learning (ML) approaches for the detection and prediction of such events exploiting climate/environmental variables. The interTwin DTE is aimed at providing the software and computing infrastructure for handling these complex applications in terms of AI models, data processing and workflow management.


This contribution will cover the use cases concerning extreme weather events, supported by project partner CMCC.

interTwin is funded by the European Union (Horizon Europe) under grant agreement No 101058386.

How to cite: Franck, G. and Elia, D.: The interTwin DTE: supporting the development of extreme weather events applications, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-16311, https://doi.org/10.5194/egusphere-egu24-16311, 2024.

17:05–17:15 | EGU24-9244 | On-site presentation
Kostas Leptokaropoulos, Shubo Chakrabarti, and Fabrizio Antonio

The increasing volume and complexity of Earth and environmental data require efficient, interdisciplinary collaboration between scientists and data providers. This can be achieved by utilising research infrastructures that provide advanced e-services exploiting data integration and interoperability, seamless machine-to-machine data exchange and HPC/cloud facilities.

In this contribution we will present a case study of geodata import, analysis and visualization, carried out on the ENES Data Space (https://enesdataspace.vm.fedcloud.eu), a cloud-enabled data science environment for climate data analysis built on top of the European Open Science Cloud (EOSC) Compute Platform. After joining the service by using an institutional or social media account, the site users can launch JupyterLab where they have access to a personal workspace as well as compute resources, tools and ready-to-use climate datasets, comprising past data recordings and future projections, mainly from the CMIP (Coupled Model Intercomparison Project) international effort. In this example, global precipitation data from CMCC experiments will be used. The analysis will be carried out within the ENES workspace in two different ways:

First, we will launch MATLAB Online from a web browser directly from the ENES Data Space JupyterLab where a Live Script (.mlx) will import, filter, and manipulate the data, create maps, compare results and perform hypothesis testing to evaluate the statistical significance of different outcomes. Live Scripts are notebooks that allow clear communication of research methods and objectives, combining data, hyperlinks, text and code and can include UI (User Interface) tools for point-and-click data processing and visualization, without the need for advanced programming skills.

Second, we will demonstrate the same process running the MATLAB kernel from a Jupyter notebook (.ipynb) in the same JupyterLab.

In both cases, results can be exported in multiple formats (e.g., PDF, Markdown, LaTeX), downloaded and shared with other researchers, students, and fellow educators. The entire process is carried out in MATLAB within the ENES Data Space environment, with no need to install software or download data on the users’ local (non-cloud) devices.
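
For orientation, a rough Python equivalent of the data-loading and averaging step is sketched below (the workflow described above uses MATLAB; the file name is a placeholder for a CMIP precipitation file available in the ENES Data Space workspace).

```python
import matplotlib.pyplot as plt
import xarray as xr

# Placeholder path to a CMIP monthly precipitation file in the
# user's ENES Data Space workspace.
ds = xr.open_dataset("pr_Amon_CMCC-ESM2_historical_r1i1p1f1.nc")

pr = ds["pr"]  # precipitation flux, typically kg m-2 s-1

# Unweighted global mean time series (area weighting omitted for brevity).
global_mean = pr.mean(dim=("lat", "lon"))
global_mean.plot()
plt.show()
```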

How to cite: Leptokaropoulos, K., Chakrabarti, S., and Antonio, F.: Analysing open climate data - a case study using the MATLAB Integration for Jupyter on the ENES Data Space environment, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-9244, https://doi.org/10.5194/egusphere-egu24-9244, 2024.

17:15–17:25 | EGU24-18535 | On-site presentation
Vaida Plankytė, Rory Macneil, Rorie Edmunds, and Noortje Haugstvedt

Overview

The International Generic Sample Number (IGSN ID), functionally a DataCite DOI, enables material samples from any discipline to be identified with a globally unique and persistent ID. 

This scalable FAIRification of samples enables transparent and traceable connections between a sample and other research entities, including (sub)samples, collections, instruments, grants, data, publications, people, and organizations. In 2023, support for the registration, metadata input, and publication of IGSN IDs was incorporated into the RSpace sample management system.

After introducing IGSN IDs, we overview the use case developed in collaboration with UiT The Arctic University of Norway regarding research workflows involved in geosciences field studies, and the corresponding IGSN ID and sample management functionality required to support these research workflows.

We then present our incorporation of IGSN IDs into RSpace as part of an institutional deployment solution for FAIR samples, detailing features and their various design considerations based on researcher needs.

Geosciences Use Case – UiT The Arctic University of Norway

A research group within the Department of Geosciences plans to assign IGSN IDs to samples collected during their 2024 field campaign in a remote Arctic area. The group needs to record basic structured sample information offline, while in the field. The institutional research data managers wish to increase sample visibility within the greater research community, ensure metadata format standardization, and facilitate metadata management by using an ELN with IGSN ID capabilities.

An offline field data collection tool, FieldMark, can be used to design powerful templates for metadata capture, and links IGSN IDs scanned from physical labels with rich metadata, including geolocation capture. Once back from the field, the sample metadata and templates, and their associated IGSN IDs, can be imported into RSpace, preserving format.

Moreover, by assigning IGSN IDs to samples as well as features-of-interest, using instrument PIDs, and linking related entities, researchers model a rich PID graph that accurately portrays these relationships.

The samples are then utilized in active research: RSpace supports editing sample metadata as well as the underlying templates, linking experimental records and materials with samples, and including optional metadata fields.

Finally, the samples can be published alongside other materials, with RSpace generating a public metadata landing page for each sample containing both IGSN ID and domain-specific metadata. The IGSN ID metadata also becomes findable in DataCite’s records and API.
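
Because IGSN ID metadata is registered with DataCite, it can also be retrieved programmatically through the public DataCite REST API. A minimal sketch (the DOI below is a placeholder, not a real IGSN ID):

    import requests

    doi = "10.12345/igsn-example"  # hypothetical IGSN ID
    resp = requests.get(f"https://api.datacite.org/dois/{doi}",
                        headers={"Accept": "application/vnd.api+json"})
    resp.raise_for_status()

    # The JSON:API response carries the registered metadata attributes,
    # e.g. titles and the geolocations captured in the field.
    attrs = resp.json()["data"]["attributes"]
    print(attrs["titles"], attrs.get("geoLocations"))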

RSpace IGSN ID Features

We present the IGSN ID implementation in RSpace, including recent functionality:

  • Assigning ROR (Research Organization Registry) IDs to an RSpace instance, automatically populating IGSN metadata with affiliation information
  • Geolocation support for dynamic point, box, and polygon map previews alongside the coordinates on the public landing page
  • Ability to display domain-specific sample fields on the landing page to enable comprehensive metadata sharing

As well as upcoming work:

  • Integrating with other DataCite Service Providers to facilitate deposit of sample metadata into domain-specific repositories, analogous to ELN document export to repositories
  • Facilitating the use of singleton samples alongside batches of subsamples, while retaining the system’s ease of navigation and conceptual clarity

How to cite: Plankytė, V., Macneil, R., Edmunds, R., and Haugstvedt, N.: Using IGSN IDs in Geosciences Sample Management with RSpace: Use Case & Workflows, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-18535, https://doi.org/10.5194/egusphere-egu24-18535, 2024.

17:25–17:35
|
EGU24-8255
|
ECS
|
On-site presentation
Helena Ciechowska, Łukasz Rudziński, Beata Orlecka-Sikora, Alessandro Vuan, Anastasios Kostoglou, and Aderson Farias do Nascimento

Man-made alterations to the environment can become a source of seismic activity, and the Castanhão region (Ceará, NE Brazil) is an example of this. The Castanhão Reservoir was created by damming the Jaguaribe River, which triggered the occurrence of earthquake swarms on the site.

In this study, we aim to analyze the data and understand the mechanism behind the seismic activity in the Castanhão region. Such a study requires an interdisciplinary approach employing data from various disciplines, such as seismology, geology, geomechanics, and hydrology. The starting data set contains continuous waveforms recorded at six seismological stations from January to December 2010. Two detection algorithms were applied. Initial detection was performed with the STA/LTA algorithm, which allowed for the preparation of 53 templates with a good S/N ratio. The input templates were then used, in the frequency range from 5 to 100 Hz, to match self-similar events and augment the initial catalog.
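
For illustration, the initial STA/LTA detection step could be sketched with ObsPy as below; the window lengths, filter band, and trigger thresholds are assumptions for demonstration, not the parameters used in the study:

    from obspy import read
    from obspy.signal.trigger import classic_sta_lta, trigger_onset

    tr = read("station.mseed")[0]  # one continuous trace (placeholder file)
    tr.filter("bandpass", freqmin=5.0, freqmax=45.0)  # freqmax below Nyquist

    df = tr.stats.sampling_rate
    cft = classic_sta_lta(tr.data, int(1 * df), int(10 * df))  # 1 s STA, 10 s LTA
    onsets = trigger_onset(cft, 3.5, 1.0)  # on/off trigger thresholds
    print(f"{len(onsets)} candidate events")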

Due to the station coverage and the low magnitude of the events, the detailed analysis is performed on 187 events out of the more than 300 detected with PyMPA template matching. The events were located using the Hypo71 software, and the mechanisms are analysed with KiwiTool.

The Castanhão EPISODE is planned to be made available on the EPISODES Platform of the EPOS Thematic Core Service Anthropogenic Hazards.

How to cite: Ciechowska, H., Rudziński, Ł., Orlecka-Sikora, B., Vuan, A., Kostoglou, A., and Nascimento, A. F. D.: The Castanhão EPISODE - the case study of Reservoir Induced Seismicity (RIS) in NE Brazil., EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-8255, https://doi.org/10.5194/egusphere-egu24-8255, 2024.

17:35–17:45
|
EGU24-13367
|
On-site presentation
Rui Fernandes, Carine Bruyninx, Luis Carvalho, Paul Crocker, Gael Janex, Juliette Legrand, Jean-Luc Menut, Anne Socquet, and Mathilde Vergnolle and the EPOS-GNSS Contributors

As the European Plate Observing System (EPOS) transitions into its Operational Phase, the EPOS-GNSS Thematic Core Service continues to play a pivotal role in managing and disseminating Global Navigation Satellite Systems (GNSS) data and products across Europe. With this advance into the operational stage, the commitment to organizational effectiveness and technical innovation has been reinforced, ensuring that EPOS-GNSS continues to provide valuable services and products tailored for Solid Earth research applications.

In this presentation, we highlight key developments achieved during the pre-operational phase and the ongoing operational status, where evolution continues to be a central component for the EPOS-GNSS community. The four critical pillars of EPOS-GNSS are discussed: (a) Governance – we have intensified efforts to ensure the representation and recognition of the entire community, as well as deepening collaboration with data providers, end-users, and pan-European infrastructures, notably EUREF; (b) Metadata and Data – the dissemination of quality-controlled GNSS data and associated metadata has been integrated into the operational framework; (c) Products – internally consistent GNSS solutions of dedicated products (time series, velocities, and strain rates) using state-of-the-art methodologies; and (d) Software – GLASS, the dedicated software package that facilitates the dissemination of GNSS data and products following FAIR principles, while maintaining rigorous quality control procedures, through four different GNSS-dedicated web portals and the EPOS Integrated Core Services Data Portal.

Finally, we present examples of the usage of the EPOS-GNSS Data Products in multi-, inter-, and trans-disciplinary studies, demonstrating the importance of geodetic information for Solid Earth studies, particularly in an integrated environment as promoted by EPOS.

How to cite: Fernandes, R., Bruyninx, C., Carvalho, L., Crocker, P., Janex, G., Legrand, J., Menut, J.-L., Socquet, A., and Vergnolle, M. and the EPOS-GNSS Contributors: EPOS-GNSS – Operational Advancements in EPOS GNSS Data and Product Services, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-13367, https://doi.org/10.5194/egusphere-egu24-13367, 2024.

17:45–17:55
|
EGU24-17229
|
Virtual presentation
Lydie Gailler, Philippe Labazuy, Romain Guillard, Solène Buvat, Clément Grace, Erwan Thébault, and Edouard Régis and the ERT3D Scan4Volc

Our understanding of dynamic volcanic processes (fluid transfers at depth and eruptions, collapses and sliding, etc.) relies directly on our knowledge of the geometries of magmatic and hydrothermal systems, mechanical heterogeneities, and how these structures evolve in time. Imaging the internal structure and temporal dynamics of volcanoes still represents a real challenge for unambiguously identifying the processes that govern their evolution, including eruptive precursors, instability phenomena, surface manifestations, and their repercussions. It is therefore necessary to more rigorously constrain the geometry and the spatio-temporal dynamics of these structures, and their activation at different depths.

The behaviour of these structural volcanic features strongly depends on physical parameters, such as temperature and fluid composition, that can be assessed using a range of complementary ground and remote observations. Among these, geophysical methods provide images of the internal structure, which can subsequently be translated in terms of geological structure and evolution. Such constraints are also necessary to produce more realistic numerical models. Recent improvements to the available suite of instrumentation for volcanological studies, including field geophysics (ground and airborne Unmanned Aerial Vehicles, UAVs), remote sensing methods, and numerical capabilities, allow us to build even more comprehensive analyses of such terrestrial phenomena. In addition, combining several spatial scales (local and more regional) and temporal scales (one-off studies, time lapse through reiterations, time series) helps to better follow the dynamics of the edifices and anticipate eruptive crises and associated hazards.

Here we focus on the highly active and well-monitored Piton de la Fournaise laboratory volcano, which is an excellent case study for developing and applying new methodologies to address both scientific and societal issues. Among the most significant parameters, recent studies have evidenced the potential of magnetic field measurements in imaging thermal anomalies (strong influence of temperature on magnetic measurements) and mechanical heterogeneities (fracturing-alteration at depth). Electrical resistivity is also a powerful tool in volcanic contexts, being very sensitive to fluid contents and particularly well suited to imaging the shallow structure of a volcanic edifice through, for example, innovative 3D surveys, or at greater depth using magnetotelluric measurements. Based on the analysis of combined recent reiterations of ground magnetic measurements, UAV magnetic and thermal infrared acquisitions, as well as high-resolution electrical resistivity measurements, we focus on the 3D structure and recent evolution of the summit activity at Piton de la Fournaise, using additional constraints such as seismicity and deformation (InSAR inverse modelling).

This study confirms that detecting resistivity and magnetization anomalies, and quantifying their spatiotemporal evolution, can provide powerful tools for imaging volcanic systems at various scales and for providing warning of associated hazards. It also highlights the necessity of 4D monitoring of volcanic edifices with this method to provide greater precision, an important issue that is now made possible by UAVs and near-real-time analyses.

These observational datasets are intended to be integrated into open databases distributed through French and European research structures and infrastructures, namely the National Volcanology Observation Service (CNRS-INSU), the Epos-France and Data Terra Research Infrastructures, as well as the EPOS VOLC-TCS.

How to cite: Gailler, L., Labazuy, P., Guillard, R., Buvat, S., Grace, C., Thébault, E., and Régis, E. and the ERT3D Scan4Volc: Active structures and thermal state of the Piton de la Fournaise summit revealed by multi-methods high resolution imaging, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-17229, https://doi.org/10.5194/egusphere-egu24-17229, 2024.

17:55–18:00

Posters on site: Wed, 17 Apr, 10:45–12:30 | Hall X4

Display time: Wed, 17 Apr 08:30–Wed, 17 Apr 12:30
Chairpersons: Angeliki Adamaki, Federica Tanlongo, Jacco Konijn
X4.176
|
EGU24-6798
|
ECS
Daoye Zhu, Yuhong He, and Kent Moore

Currently, effectively managing, retrieving, and applying environmental big data (EBD) presents a considerable challenge owing to the abundant influx of heterogeneous, fragmented, and real-time information. The existing network domain name system lacks the spatial attribute mining necessary for handling EBD, while the geographic region name system proves inadequate in achieving EBD interoperability. EBD integration faces challenges arising from diverse sources and formats. Interoperability gaps hinder seamless collaboration among systems, impacting the efficiency of data analysis.

To address the need for unified organization of EBD, precise man-machine collaborative spatial cognition, and EBD interoperability, this paper introduces the EBD grid region name model based on the GeoSOT global subdivision grid framework (EGRN-GeoSOT). EGRN-GeoSOT effectively manages location identification codes from various sources, ensuring the independence of location identification while facilitating correlation, seamless integration, and spatial interoperability of EBD. The model comprises the grid integration method of EBD (GIGE) and the grid interoperability method of EBD (GIOE), providing an approach to enhance the organization and interoperability of diverse environmental datasets. By discretizing the Earth's surface into a uniform grid, GIGE enables standardized geospatial referencing, simplifying data integration from various sources. The integration process involves the aggregation of disparate environmental data types, including satellite imagery, sensor readings, and climate model outputs. GIGE creates a unified representation of the environment, allowing for a comprehensive understanding of complex interactions and patterns. GIOE ensures interoperability by providing a common spatial language, facilitating the fusion of heterogeneous environmental datasets. The multi-scale characteristic of GeoSOT allows for scalable adaptability to emerging environmental monitoring needs.
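
To make the grid idea concrete, the following is a greatly simplified, hypothetical sketch of a quadtree-style location code in the spirit of such subdivision grids; the actual GeoSOT scheme uses a different subdivision and code layout:

    def grid_code(lat: float, lon: float, levels: int = 8) -> str:
        """Map a lat/lon to a hierarchical cell code by repeated quartering."""
        lat0, lat1, lon0, lon1 = -90.0, 90.0, -180.0, 180.0
        code = []
        for _ in range(levels):
            mid_lat, mid_lon = (lat0 + lat1) / 2, (lon0 + lon1) / 2
            code.append(str((lat >= mid_lat) * 2 + (lon >= mid_lon)))
            lat0, lat1 = (mid_lat, lat1) if lat >= mid_lat else (lat0, mid_lat)
            lon0, lon1 = (mid_lon, lon1) if lon >= mid_lon else (lon0, mid_lon)
        return "".join(code)

    # Nearby points share a long common code prefix, which is what makes
    # grid codes convenient for indexing and joining heterogeneous data.
    print(grid_code(48.8566, 2.3522), grid_code(48.8570, 2.3530))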

EGRN-GeoSOT establishes a standardized framework that enhances integration, promotes interoperability, and empowers collaborative environmental analysis. To verify the feasibility and retrieval efficiency of EGRN-GeoSOT, it was implemented on Oracle and PostgreSQL databases, and its retrieval efficiency and database capacity were compared with those of the corresponding spatial databases, Oracle Spatial and PostgreSQL + PostGIS, respectively. The experimental results showed that EGRN-GeoSOT not only keeps the capacity consumption of the database reasonable but also achieves higher retrieval efficiency for EBD.

How to cite: Zhu, D., He, Y., and Moore, K.: Novel environmental big data grid integration and interoperability model, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-6798, https://doi.org/10.5194/egusphere-egu24-6798, 2024.

X4.177
|
EGU24-12230
|
Barbara Magagna, Marek Suchánek, and Tobias Kuhn

Central to research is the capability to build on existing research outcomes and to aggregate data from different sources to create new research findings. This is particularly true for environmental research, which seeks to address global challenges like climate change and biodiversity loss by integrating diverse long-term monitoring and experimental data.

Interoperability is the ability of computer systems to exchange information; to arrive at a shared understanding of the meaning of that information, semantic interoperability is required. Shared understanding between all parties involved can be achieved using common standards like vocabularies, metadata and semantic models.

But how can researchers find out which standards are used, and by whom? FAIR Implementation Profiles (FIPs), co-developed by the GO FAIR Foundation and ENVRI-FAIR in 2020 (https://doi.org/10.1007/978-3-030-65847-2_13) and used so far by more than 120 communities, such as the ENVRIs and WorldFAIR (see also https://fairdo.org/wg/fdo-fipp/), might be a good source of knowledge. This socio-technical approach drives explicit and systematic community agreements on the use of FAIR implementations, including domain-relevant community standards, called FAIR-Enabling Resources (FERs). The FIP Wizard (https://fip-wizard.ds-wizard.org/), implemented on top of the open-source DSW tool, provides a user interface through which the researcher answers questions related to each of the Principles by selecting FERs expressed as nanopublications. A nanopublication (https://nanopub.net/) is represented as a machine-interpretable knowledge graph and includes three elements: assertion, provenance, and publication info; in the context of FIPs, the assertion contains essential metadata about a FER.
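
As an illustration of that three-part structure, here is a minimal sketch of a nanopublication-like dataset built with rdflib named graphs; the namespaces and the asserted FER metadata are placeholders, not a real published nanopublication:

    from rdflib import Dataset, Literal, Namespace, URIRef

    EX = Namespace("http://example.org/np/")
    DCT = Namespace("http://purl.org/dc/terms/")

    ds = Dataset()
    assertion = ds.graph(EX.assertion)    # what is claimed (metadata of a FER)
    provenance = ds.graph(EX.provenance)  # where the assertion comes from
    pubinfo = ds.graph(EX.pubinfo)        # who published it, and when

    fer = URIRef("http://example.org/resources/some-vocabulary")
    assertion.add((fer, DCT.title, Literal("A FAIR-Enabling Resource")))
    provenance.add((EX.assertion, DCT.creator,
                    URIRef("https://orcid.org/0000-0000-0000-0000")))
    pubinfo.add((EX.this, DCT.created, Literal("2024-04-17")))

    print(ds.serialize(format="trig"))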

Using the same approach and technology, but focusing on semantic interoperability aspects, the Semantic Interoperability Profile (SIP) was developed in the context of the EOSC Semantic Interoperability Task Force. It is used to interview semantic or data management experts involved in research projects or infrastructures, who thereby collectively contribute to a knowledge base of interoperability solutions (https://doi.org/10.5281/zenodo.8102786). The SIP focuses on the standards used to implement Principle F2 (metadata) and the Interoperability Principles (I1, I2, I3, related to semantic artefacts), but also queries the services used to generate, edit, publish, and transform them, altogether called FAIR Supporting Resources (FSRs). The survey is an ongoing effort, and everybody can contribute to it via the SIP Wizard (https://sip-wizard.ds-wizard.org/). In summary, a SIP is a machine-interpretable collection of resources chosen by a community, whereby the collection can be made specific to a data type and a semantic interoperability case study.

FAIR Connect (https://fairconnect.pro/) is being developed to provide a user-friendly, graphics-rich dashboard and search engine for nanopublications of type FSR. It will enable users to find FSRs based on their type or label and will, at the same time, show by which communities they are used. A future iteration will also enable filters on data types and case studies.

How to cite: Magagna, B., Suchánek, M., and Kuhn, T.: Semantic Interoperability Profiles as knowledge base for semantic solutions, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-12230, https://doi.org/10.5194/egusphere-egu24-12230, 2024.

X4.178
|
EGU24-20168
Approaches to semantic interoperability in the ITINERIS project
(withdrawn)
Cristina Di Muri, Davide Raho, Alexandra N. Muresan, Martina Pulieri, and Ilaria Rosati
X4.179
|
EGU24-19379
Peter Baumann

Earth data are an archetypical case of Big Data, with all their Volume, Velocity, Variety, and Veracity challenges. Standardization efforts in ISO, OGC, and other bodies have therefore long been concerned with developing and advancing specifications for structures and services suitable for Big Data. Questions to keep in mind include:
- How can data wrangling be simplified, for example through better suited concepts and elimination of unnecessary technicalities?
- How can specifications support scalable implementations?
- What is necessary to make data ready for analysis, more generally: for the various types of consumption?

Specifically, the commonly accepted concept of multi-dimensional coverages - corresponding to the notion of spatio-temporal "fields" in physics - addresses Big Data in practice: regular and irregular grids, point clouds, and general meshes. Recently, ISO has adopted two "abstract" standards with normative definitions of coverage concepts and terminology: 19123-1 addresses the coverage data model, whereas 19123-3 is about coverage processing fundamentals, utilizing the OGC Web Coverage Processing Service (WCPS) model. OGC has adopted both as an update to its Abstract Topic 6.

On the level of "concrete" specifications directed towards implementation and conformance testing, there is the joint OGC/ISO Coverage Implementation Schema (CIS). In OGC, the current version is 1.1, which introduces the General Grid Coverage as a more powerful, yet simplified, structure for regular and irregular grids. ISO has commenced work on updating its 19123-2 (which is still based on OGC CIS 1.0) to CIS 1.1. On the processing side, there are various activities in addition to the proven, mature WCS and WCPS, such as drafts for OAPI-Coverages and GeoDataCube.
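
To give a flavour of WCPS, the sketch below sends a query through the WCS ProcessCoverages request; the endpoint URL and coverage name are placeholders, and the exact KVP binding may vary by server:

    import requests

    # A WCPS query: slice a hypothetical coverage at one time step and
    # encode the result as PNG.
    wcps = ('for $c in (AverageTemperature) '
            'return encode($c[ansi("2020-01")], "image/png")')

    resp = requests.get("https://example.org/rasdaman/ows",
                        params={"service": "WCS", "version": "2.0.1",
                                "request": "ProcessCoverages", "query": wcps})
    resp.raise_for_status()
    open("jan2020.png", "wb").write(resp.content)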

We present the current status of ISO and OGC standardization work on coverages. The author has been active as an editor of adopted and in-progress standards in OGC and ISO for over 15 years and is intimately familiar with the standardization work there. By sharing the status and plans of this work, the talk provides an opportunity for the community to comment and contribute suggestions.


How to cite: Baumann, P.: Status and Planning Update on Big Data Standardization in OGC and ISO, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-19379, https://doi.org/10.5194/egusphere-egu24-19379, 2024.

X4.180
|
EGU24-17978
Gabriel Pelouze, Spiros Koulouzis, and Zhiming Zhao

Studying many scientific problems, such as environmental challenges or cancer diagnosis, requires extensive data, advanced models, and distributed computing resources. Researchers often reuse assets (e.g. data, AI models, workflows, and services) from different parties to tackle these issues. This requires effective collaborative environments that enable advanced data science research: discovery, access, interoperation and reuse of research assets, and integration of all resources into cohesive observational, experimental, and simulation investigations with replicable workflows. Such use cases can be effectively supported by Virtual Research Environments (VREs). Existing VRE solutions are often built with preconfigured data sources, software tools, and functional components for managing research activities. While such integrated solutions can effectively serve a specific scientific community, they often lack flexibility and require significant time investment to use external assets, build new tools, or integrate with other services. In contrast, many researchers and data scientists are familiar with notebook environments such as Jupyter.

We propose a VRE solution for Jupyter to bridge this gap: Notebook-as-a-VRE (NaaVRE). At its core, NaaVRE allows users to build functional blocks by containerizing cells of notebooks, composing them into workflows, and managing the lifecycle of experiments and resulting data. The functional blocks, workflows, and resulting datasets can be shared to a common marketplace, enabling the creation of communities of users and customized VREs. Furthermore, NaaVRE integrates with external sources, allowing users to search, select, and reuse assets such as data, software, and algorithms. Finally, NaaVRE natively works with modern cloud technologies, making it possible to use compute resources flexibly and cost-effectively.
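
The core idea of turning a notebook cell into a reusable, containerized block can be sketched as follows; this is a conceptual illustration only (file names and cell index are assumptions), not NaaVRE's actual tooling, which also handles dependencies, interfaces between cells, and workflow composition:

    import nbformat

    # Read a notebook and pick one code cell as a functional block
    # ("experiment.ipynb" and the cell index are placeholders).
    nb = nbformat.read("experiment.ipynb", as_version=4)
    cell = nb.cells[2]

    with open("block.py", "w") as f:
        f.write(cell.source)

    # Emit a minimal container recipe so the block can run anywhere
    dockerfile = (
        "FROM python:3.11-slim\n"
        "COPY block.py /app/block.py\n"
        'CMD ["python", "/app/block.py"]\n'
    )
    with open("Dockerfile", "w") as f:
        f.write(dockerfile)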

We demonstrate the versatility of NaaVRE by building several customized VREs that support legacy scientific workflows from different communities. This includes the derivation of ecosystem structure from Light Detection and Ranging (LiDAR) data, the tracking of bird migrations from radar observations, and the characterization of phytoplankton species. The NaaVRE is also being used to build Digital Twins of ecosystems in the Dutch NWO LTER-LIFE project.

How to cite: Pelouze, G., Koulouzis, S., and Zhao, Z.: Notebook-as-a-VRE (NaaVRE): From private notebooks to a collaborative cloud virtual research environment, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-17978, https://doi.org/10.5194/egusphere-egu24-17978, 2024.

X4.181
|
EGU24-19358
|
ECS
Na Li, Peide Zhu, Gabriel Pelouze, Spiros Koulouzis, and Zhiming Zhao

Data science and machine learning techniques are increasingly important in tackling societal challenges and complex problems in environmental and earth sciences. Effectively sharing and (re)using research assets, including data sets, models, software tools, documents, and computing notebooks, is crucial for such data-centric research activities. Researchers can reproduce others’ experiments with the available research assets to verify the results and to further develop new experiments. Computational notebooks, an important asset, comprise free-form textual descriptions and code snippets. Notebook runtime environments, e.g., Jupyter, provide scientists with an interactive environment to construct and share experiment descriptions and code as notebooks.


To enable effective research asset discovery, research infrastructures not only need to FAIRify the assets with rich metadata and unique identifiers, but also need to provide search functionality and tools that facilitate the construction of scientific workflows for data science experiments with research assets from multiple sources. General-purpose search engines are helpful for an initial coarse-grained search, but often fail to find the multiple types of research assets, such as data sets and notebooks, needed by the research. Community-specific catalogues, e.g., in ICOS and LifeWatch, provide search capabilities for precisely discovering data sets, but they are often limited to a specific type of asset. A researcher therefore has to spend a lot of time searching across multiple catalogues to discover all the types of assets needed.

In the search process, user queries tend to be short, comprising a few key phrases, so considerable effort is needed to understand users’ information needs. Given the complexity of computational notebook contents and the mismatch between the form of user queries and the notebooks themselves, it is critical to understand queries through augmentation and to make explainable relevance judgments. To address these challenges, we developed a research asset search system for a Jupyter notebook-based Virtual Research Environment (called Notebook as a VRE) that supports scientific query understanding with query reformulation and explainable computational notebook relevance judgments via notebook summarization.

The proposed system includes four major components: the query reformulation module, the notebook indexer and retriever, the summarization component, and the user interface. The query reformulation module performs active query understanding: scientific entities are extracted from user queries, related entities are retrieved from external knowledge graphs and resources as expansions, and the reformulated queries are ranked for users to choose from. The system has been validated with a small user group and will be further developed in the coming ENVRI-HUB NEXT project to support conversational search and recommendation.
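
As a toy illustration of the expansion step, the sketch below substitutes related terms into a query; the hard-coded lookup table stands in for the external knowledge graphs and the ranking model the system actually uses:

    # Hypothetical related-term table (a stand-in for knowledge-graph lookups)
    RELATED = {
        "lidar": ["laser scanning", "point cloud"],
        "phytoplankton": ["chlorophyll", "algal bloom"],
    }

    def reformulate(query: str) -> list[str]:
        """Return the original query plus simple entity-expansion variants."""
        q = query.lower()
        candidates = [q]
        for term in q.split():
            for alt in RELATED.get(term, []):
                candidates.append(q.replace(term, alt))
        return candidates  # a ranked list would be shown to the user

    print(reformulate("LiDAR ecosystem structure"))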

How to cite: Li, N., Zhu, P., Pelouze, G., Koulouzis, S., and Zhao, Z.: Research Notebook Retrieval with Explainable Query Reformulation, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-19358, https://doi.org/10.5194/egusphere-egu24-19358, 2024.

X4.182
|
EGU24-7290
Dick M. A. Schaap, Tjerk Krijger, Sara Pittonet, and Pasquale Pagano

The pilot Blue-Cloud project, part of ‘The Future of Seas and Oceans Flagship Initiative’ of EU HORIZON 2020, combined the interests of developing a thematic marine EOSC cloud with serving the Blue Economy, Marine Environment and Marine Knowledge agendas. It deployed a versatile cyber platform with smart federation of multidisciplinary data repositories, analytical tools, and computing facilities, in support of exploring and demonstrating the potential of cloud-based open science for ocean sustainability, the UN Decade of the Oceans, and the G7 Future of the Oceans. The pilot Blue-Cloud delivered:

  • Blue-Cloud Data Discovery & Access service (DD&AS), federating key European data management infrastructures, to facilitate users in finding and retrieving multi-disciplinary datasets from multiple repositories
  • Blue-Cloud Virtual Research Environment infrastructure (VRE) providing a range of services and facilitating orchestration of computing and analytical services for constructing, hosting and operating Virtual Labs for specific applications
  • Five multi-disciplinary Blue-Cloud Virtual Labs (VLabs), configured with specific analytical workflows, targeting major scientific challenges, and serving as real-life Demonstrators, which can be adopted and adapted for other inputs and analyses.    

Launched in early 2023, Blue-Cloud 2026 aims at a further evolution into a Federated European Ecosystem to deliver FAIR & Open data and analytical services, instrumental for deepening research into oceans, EU seas, and coastal & inland waters.

The DD&AS already federates leading Blue Data Infrastructures, such as EMODnet, SeaDataNet, Argo, EuroArgo, ICOS, SOCAT, EcoTaxa, ELIXIR-ENA, and EurOBIS, and facilitates common discovery and access to more than 10 million marine datasets for physics, chemistry, geology, bathymetry, biology, biodiversity, and genomics. It is fully based on machine-to-machine brokering interactions with web services as provided and operated by the Blue Data Infrastructures. As part of Blue-Cloud 2026 it will expand by federating more leading European Aquatic Data Infrastructures, work on improving the FAIRness of the underpinning web services, incorporating semantic brokering, and adding data subsetting query services.

The Blue-Cloud VRE, powered by D4Science, facilitates collaborative research, offering computing, storage, analytical, and generic services for constructing, hosting and operating analytical workflows for specific applications. Blue-Cloud 2026 will expand the VRE by federating multiple e-infrastructures as provided by EGI, Copernicus WEkEO, and EUDAT. This way, it will also open connectivity to applications developed in other EU projects such as iMAGINE (AI applications for the marine domain) and EGI-ACE (applications for ocean use cases).

During EGU we will share insights into the solutions regarding semantics supporting interoperability and harmonised data access. This will be illustrated in particular via the development of new Blue-Cloud analytical Big Data “WorkBenches” that are generating harmonised and validated data collections of Essential Ocean Variables (EOVs) in physics (temperature and salinity), chemistry (nutrients, chlorophyll, oxygen) and biology (plankton taxonomy, functions and biomass). Access to harmonised subsets of the BDIs’ data collections will be supported by new tools like BEACON and the I-Adopt framework. The EOV collections are highly relevant for analysing the state of the environment. This way, Blue-Cloud 2026 will provide a core data service for the Digital Twin of the Ocean, EMODnet, Copernicus, and various research communities.

How to cite: Schaap, D. M. A., Krijger, T., Pittonet, S., and Pagano, P.: Blue-Cloud 2026, services to deliver, access and analyse FAIR & Open marine data, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-7290, https://doi.org/10.5194/egusphere-egu24-7290, 2024.

X4.183
|
EGU24-7391
Mariusz Majdanski, Iris Christadler, Giuseppe Puglisi, Jan Michalek, Stefanie Weege, Fabrice Cotton, Angelo Strollo, Mateus Prestes, Helle Pedersen, Laurentiu Danciu, Marc Urvois, Stefano Lorito, Daniele Bailo, Otto Lange, and Gaetano Festa

The Geo-INQUIRE (Geosphere INfrastructure for QUestions into Integrated REsearch) project, supported by the Horizon Europe Programme, is aimed at enhancing the Earth Sciences Research Infrastructures and services to make data and high-level products accessible to the broad Geoscience scientific community. Geo-INQUIRE’s goal is to encourage curiosity-driven studies into understanding the Geosystem processes at the interface between the solid Earth, the oceans and the atmosphere using big data collections, high-performance computing methods and cutting-edge facilities.

The project has a strong focus on supporting the dynamic development of the actual use of research infrastructures, and training, networking, and community-building activities will be key to fostering it. The methodology empowers the participation of both young and experienced researchers, including those from often underrepresented communities, and incorporates new and intersectional perspectives, while addressing current major environmental and economic challenges and fertilising curiosity-driven, cross-disciplinary research.

The project dissemination activities include a series of open online trainings and more specialised on-site workshops focused on data, data products and software solutions. Researchers, early-stage scientists and students will be able to explore the various fields of geosphere-related science, including those not directly related to their own field, through the connections offered by Research Infrastructures. Through lectures and use cases, we expect to show and teach them how to use data and information coming from cross-disciplinary RIs. We would like to increase awareness of the capacity and capabilities of “other” RIs, as well as of data integration and the importance of FAIR principles. The training offer is constantly updated on the project web page www.geo-inquire.eu.

In addition, two summer schools will be organised, dedicated to cross-disciplinary interactions of the solid Earth with marine science and with atmospheric physics. The first school will be organised in autumn 2024 in the Gulf of Corinth (Greece), and the second in autumn 2025 in Catania, Sicily (Italy).

The applications for training activities will be evaluated by a panel that reviews the technical and scientific feasibility of the proposed application project, ensuring equal opportunities and diversity in terms of gender, geographical distribution and career stage. The data and products generated during the Transnational Access to research facilities will be made available to the scientific community via the project’s strict adherence to FAIR principles.

Geo-INQUIRE is funded by the European Commission under project number 101058518 within the HORIZON-INFRA-2021-SERV-01 call.

How to cite: Majdanski, M., Christadler, I., Puglisi, G., Michalek, J., Weege, S., Cotton, F., Strollo, A., Prestes, M., Pedersen, H., Danciu, L., Urvois, M., Lorito, S., Bailo, D., Lange, O., and Festa, G.: Fostering cross-disciplinary research - Training, workshops and summer schools of Geo-INQUIRE EU-project, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-7391, https://doi.org/10.5194/egusphere-egu24-7391, 2024.

X4.184
|
EGU24-9486
|
ECS
Giuseppe Vico, Rita Chiara Taccone, Francesco Emanuele Maesano, Mara Monica Tiberti, and Roberto Basili

Earthquakes of engineering significance (magnitude 5 and above) are generated by pre-existing, relatively mature geological faults. These faults generally span a length from a few to several tens or hundreds of kilometers and can break the entire Earth’s crust.   

Defining the three-dimensional configuration of such seismogenic faults is crucial for developing applications for earthquake hazard analyses at different spatial scales and, in turn, contributing robust information to promoting earthquake risk mitigation strategies.

The reconstruction of geological fault surfaces is a typical multidisciplinary study involving a large variety of data types and processing methods that, inevitably, imply various degrees of geometric simplification depending on the available data. Among them, the most powerful, although expensive, approaches are the techniques developed for hydrocarbon exploration, namely seismic reflection (2D-3D) data combined with logs of drilled wells, which can illuminate the Earth’s subsurface at depths of several kilometers. The mining and oil and gas industries have historically collected a large amount of such data, which remained classified depending on the regulations of the country from which they obtained the exploration license. As time passes, and with the waning of fossil fuel exploitation, exploration licenses expire or are not renewed, and more of this data becomes available to be amalgamated with data collected by research institutions or public/private ventures using public funding.

Despite the vast literature on and applications of hydrocarbon exploration data, no standard procedure exists for documenting the use of such data in characterizing seismogenic faults. In this respect, scientists face challenges posed by the intersection of industry data with public research outputs, with important societal implications and barriers to ensuring FAIRness. To this end, we devised a workflow detailing the best practices to follow in the various steps geologists undertake in using hydrocarbon exploration data, starting from the source of the raw/processed data (public vs confidential) and ending with the final geological fault model. The workflow output is then ready to be integrated with the information and data from other scientific disciplines (e.g., seismology, paleoseismology, tectonic geomorphology, geodesy, geomechanical modeling, earthquake statistics) to obtain the most reliable seismogenic fault model. As proof of concept, we will present a simplified version of a software tool that guides the user in incorporating the workflow's various elements into a structured database of seismogenic faults.

How to cite: Vico, G., Taccone, R. C., Maesano, F. E., Tiberti, M. M., and Basili, R.: Best practices for using and reporting subsurface geological/geophysical data in defining and documenting seismogenic faults., EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-9486, https://doi.org/10.5194/egusphere-egu24-9486, 2024.

X4.185
|
EGU24-10742
Shane Murphy, Gaetano Festa, Stefano Lorito, Volker Röhling, Fabrice Cotton, Angelo Strollo, Marc Urvois, Andrey Babeyko, Daniele Bailo, Jan Michalek, Otto Lange, Javier Quinteros, Mariusz Majdanski, Iris Christadler, Mateus Prestes, and Stefanie Weege

The Geo-INQUIRE (Geosphere INfrastructure for QUestions into Integrated REsearch) project, supported by the Horizon Europe Programme, is aimed at enhancing services to make data and high-level products accessible to the broad Geoscience scientific community. Geo-INQUIRE’s goal is to encourage curiosity-driven studies into understanding the geosphere dynamics at the interface between the solid Earth, the oceans and the atmosphere using long data streams, high-performance computing and cutting-edge facilities. 

The Geo-INQUIRE Transnational Access (TA) covers both virtual and on-site access to a variety of state-of-the-art laboratories, facilities, experimental sites (testbeds) and computational resources, with the aim of enabling the development of excellent, ground-breaking science. Six research infrastructures located across Europe, referred to as “testbeds”, will provide locations for users to perform experiments in a variety of environments, from the Earth’s surface (both on land and at sea) to the subsurface, and over different spatial scales: from small-scale experiments in laboratories to kilometric submarine fibre cables. These sites are: the Bedretto Laboratory (Switzerland); the Ella-Link Geolab (Portugal); the Liguria-Nice-Monaco submarine infrastructure (Italy/France); the Irpinia Near-Fault Observatory (Italy); the Eastern Sicily facility (Italy); and the Corinth Rift Laboratory (Greece). In addition, ECCSEL-ERIC is providing access to five of its research facilities focussing on CO2 Capture, Utilisation, Transport and Storage. The facilities providing access are: the Svelvik CO2 Field Lab (Norway), the PITOP Borehole Geophysical Test Site (Italy), the Sotacarbo Fault Laboratory (Italy), the Catenoy experimental site and gas-water-rock interaction laboratory in Oise (France), and the Mobile Seismic Array (the Netherlands), which is fully mobile and can be deployed anywhere in the world.

TA will also be offered for software and workflows belonging to EPOS-ERIC and the ChEESE Centre of Excellence for Exascale in Solid Earth. These are grounded in the simulation of seismic waves and rupture dynamics in complex media, tsunamis, and subaerial and submarine landslides. HPC-based probabilistic tsunami, seismic and volcanic hazard workflows are offered to assess hazard at high resolution with extensive uncertainty exploration. Support and collaboration will be offered to the awardees to facilitate access to and usage of HPC resources for tackling geoscience problems.

Geo-INQUIRE will grant TA to researchers to develop their own lab-based or numerical experiments with the aim of advancing scientific knowledge of Earth processes while fostering cross-disciplinary research across Europe. The data and products generated during the TAs will be made available to the scientific community via the project’s strict adherence to FAIR principles. 

To be granted access, researchers submit a proposal to the TA calls, which will be issued three times during the project lifetime. The first call was launched on 9 January. Calls will be advertised on the Geo-INQUIRE website https://www.geo-inquire.eu/ and through the existing community channels.

How to cite: Murphy, S., Festa, G., Lorito, S., Röhling, V., Cotton, F., Strollo, A., Urvois, M., Babeyko, A., Bailo, D., Michalek, J., Lange, O., Quinteros, J., Majdanski, M., Christadler, I., Prestes, M., and Weege, S.: EU-financed transnational access in Geo-INQUIRE: an opportunity for researchers to develop leading-edge science at selected test-beds and research facilities across Europe., EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-10742, https://doi.org/10.5194/egusphere-egu24-10742, 2024.

X4.186
|
EGU24-11000
Kety Giuliacci, Daniele Bailo, Jan Michalek, Rossana Paciello, Valerio Vinciarelli, Claudio Goffi, Angelo Strollo, Fabrice Cotton, Harald Nedrebø, Sven Peter Näsholm, Quentin Brissaud, Tina Kaschwich, Enoc Martinez, Aljaz Maslo, Volker Röhling, Olivier Frezot, Javier Quinteros, Kuvvet Atakan, and Wolfgang zu Castell

In the last decade, the scientific community has witnessed a growing emphasis on data integration. The primary objective is to harness multidisciplinary data and resources to drive novel methodological approaches and scientific breakthroughs. Among the projects that have emerged in response to this trend is the Geosphere INfrastructures for QUestions into Integrated REsearch project (Geo-INQUIRE, https://www.geo-inquire.eu/).

Geo-INQUIRE was launched in October 2022 and comprises a unique consortium of 51 partners, including prominent national research institutes, universities, national geological surveys, and European consortia. Geo-INQUIRE is dedicated to surmounting cross-domain challenges, particularly those pertaining to land-sea-atmosphere environments. To accomplish this mission, Geo-INQUIRE is committed to consolidating the resources and capabilities of key research infrastructures (RIs) specializing in geosphere observations. These RIs include EPOS, EMSO, ARISE, ECCSEL, and ChEESE.

By providing access to its expanded collection of data, products, and services, Geo-INQUIRE empowers the upcoming generation of scientists to conduct cutting-edge research that addresses complex societal challenges from a multidisciplinary viewpoint. This encourages the utilization of these resources to foster curiosity-driven research endeavors.

To harmonize the data produced by these different RIs and prepare them for integration, substantial efforts have been undertaken: all installations provided by the data providers were catalogued, analysed, and assessed with respect to the maturity level required for FAIR (Findable, Accessible, Interoperable, and Reusable) data integration. In addition, dedicated seminars focused on data integration were carried out to boost the FAIR data provision process. Technical activities have been carried out to achieve cross-RI integration. In this contribution, we demonstrate and exemplify one such integration, between EMSO (https://emso.eu/) and EPOS (https://www.epos-eu.org/), which has been achieved on multiple fronts, including metadata and services.

The successful integration of metadata and services was made possible by adopting the EPOS-DCAT Application Profile (https://epos-eu.github.io/EPOS-DCAT-AP/v3/), allowing an intelligent system like the EPOS platform (https://www.ics-c.epos-eu.org/) to access the EMSO services seamlessly. Work is currently underway to develop software for visualizing heterogeneous time-series data from EMSO within the integrated framework, a crucial step towards full data integration.
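
To illustrate the kind of machine-readable description such a profile enables, here is a minimal sketch using the generic DCAT vocabulary with rdflib; the URIs are placeholders, and a real EPOS-DCAT-AP record carries considerably more mandatory fields:

    from rdflib import Graph, Literal, URIRef
    from rdflib.namespace import DCAT, DCTERMS, RDF

    g = Graph()
    svc = URIRef("https://example.org/services/emso-timeseries")  # placeholder
    g.add((svc, RDF.type, DCAT.DataService))
    g.add((svc, DCTERMS.title, Literal("EMSO time-series access service")))
    g.add((svc, DCAT.endpointURL, URIRef("https://example.org/api/timeseries")))

    print(g.serialize(format="turtle"))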

How to cite: Giuliacci, K., Bailo, D., Michalek, J., Paciello, R., Vinciarelli, V., Goffi, C., Strollo, A., Cotton, F., Nedrebø, H., Näsholm, S. P., Brissaud, Q., Kaschwich, T., Martinez, E., Maslo, A., Röhling, V., Frezot, O., Quinteros, J., Atakan, K., and zu Castell, W.: Multidisciplinary integration of FAIR Research Infrastructures in the Geo-INQUIRE initiative: the EPOS – EMSO case, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-11000, https://doi.org/10.5194/egusphere-egu24-11000, 2024.

X4.187
|
EGU24-11901
Graeme Nott and Dave Sproson

The use of airborne cloud imaging probes has resulted in decades of in situ particle-by-particle data taken across the gamut of pristine and anthropogenically modified cloud types around the globe. Image data from such probes are recorded in proprietary, instrument- or system-specific formats. These binary formats evolved to minimise the stress on now possibly outdated hardware and communication systems that must operate in the difficult aircraft environment. This means that there is a significant knowledge and technical barrier for new users, particularly those not from fields that have traditionally used such cloud data. Processed image data are generally available; however, relying on them precludes the application of more advanced or specialised processing of the raw data. For example, historical cloud campaigns of the 1970s and 80s used imaging probes for cloud microphysical measurements at a time when satellite measurements of those regions were sparse or nonexistent. Fields such as atmospheric process modelling, climate modelling, and remote sensing may well benefit from being able to ingest raw cloud particle data into their processing streams, to use in new analyses and to address issues from a perspective not normally taken by those in the cloud measurement community.

The Single Particle Image Format (SPIF) data standard has been designed to store decoded raw binary data in netCDF4 with a standardised vocabulary, in accordance with the FAIR Guiding Principles. This improves access to the data for users from a wide range of fields and facilitates the sharing, refinement, and standardisation of data processing routines. An example is the National Research Council of Canada (NRC) SPIF conversion utility, which converts binary data into SPIF files. In a similar fashion to the Climate and Forecast (CF) Conventions, SPIF defines a minimum vocabulary (groups, variables, and attributes) that must be included for compliance, while also allowing extra, non-conflicting data to be included.

The ability to easily check files for compliance with a data standard or convention is an important component of building a sustainable, community-supported data standard. We have developed a Python package called vocal as a tool for managing netCDF data product standard vocabularies and associated data product specifications. Vocal projects define standards for netCDF data and consist of model definitions and associated validators. Vocal then provides a mapping from netCDF data to these models, with the Python package pydantic used for compliance checking of files against the standard definition.
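
The essence of the pydantic-based approach can be sketched as follows, assuming pydantic v2 and using illustrative attribute and group names rather than the normative SPIF vocabulary:

    import netCDF4
    from pydantic import BaseModel

    class InstrumentGroup(BaseModel):
        """A toy model of required group attributes (names are assumptions)."""
        instrument_name: str
        resolution: float      # image resolution
        arm_separation: float  # probe arm separation

    nc = netCDF4.Dataset("particles.spif.nc")  # placeholder file name
    grp = nc.groups["2DS"]                     # hypothetical instrument group
    attrs = {k: grp.getncattr(k) for k in grp.ncattrs()}

    # Raises a ValidationError if required attributes are missing or mistyped
    print(InstrumentGroup.model_validate(attrs))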

We will present the vocal package and the SPIF data standard to illustrate their use in building standard-compliant files and in compliance checking of SPIF netCDF files.

How to cite: Nott, G. and Sproson, D.: An Open Data Standard for Cloud Particle Images and Reference Software to Produce and Validate Compliant Files, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-11901, https://doi.org/10.5194/egusphere-egu24-11901, 2024.

X4.188
|
EGU24-13654
|
ECS
Misha Schwartz, Deepak Chandan, and Steve Easterbrook

Climate researchers have access to astronomical amounts of data, but finding that data and downloading it so that it can be useful for research can be burdensome and expensive.

The team at Data Analytics for Canadian Climate Services (DACCS) is addressing that problem by creating a new system for conducting climate research and providing the software to support it. The system works by giving researchers the tools to analyze the data where it is hosted, eliminating the need to download the data at all.

In order to accomplish this, the DACCS team has developed a software stack that includes the following services:

- data hosting
- data serving (using OPeNDAP protocols)
- data search and cataloging
- interactive computational environments preloaded with climate analysis tools
- remote analysis tools (WPS and OGC API Features)

Partner organizations can deploy this software stack and choose to host any data that they wish. This data then becomes available to every other participating organization, allowing them to seamlessly access each other's data without having to move it for analysis.
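
The remote-access pattern rests on protocols like OPeNDAP, listed above: a client opens the dataset lazily and transfers only the subset a computation touches. A minimal sketch with xarray (the URL and variable name are placeholders for a partner node's holdings):

    import xarray as xr

    # Opening over OPeNDAP does not download the whole dataset
    url = "https://example.org/thredds/dodsC/datasets/tas_dataset.nc"
    ds = xr.open_dataset(url)

    # Only the requested slice is transferred when the computation runs
    subset = ds["tas"].sel(time="2010")
    print(float(subset.mean()))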

This system will allow researchers to more easily:

- discover available data hosted all over the world
- develop analysis workflows that can be run anywhere
- share their work with collaborators without having to directly share data

The DACCS team is currently participating in the Open Science Persistent Demonstrator (OSPD) initiative, and we hope that this software will contribute to the ecosystem of earth science software platforms available today.

How to cite: Schwartz, M., Chandan, D., and Easterbrook, S.: Federated Climate Research Software: improving data and workflow management for climate researchers, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-13654, https://doi.org/10.5194/egusphere-egu24-13654, 2024.

X4.189
|
EGU24-12962
Deepak Chandan, Misha Schwartz, and Steve Easterbrook

Advances in remote sensing and computing infrastructure, and the demands of modern climate research, are driving the production of new climate datasets at a breathtaking pace. It is increasingly felt by researchers that the growing volume of climate datasets is challenging to store, analyze, or generally "shepherd" through their analysis pipelines. Quite often, the ability to do this is limited to those with access to government or institutional facilities in wealthier nations, raising important questions around equitable access to climate data.

The Data Analytics for Canadian Climate Services (DACCS) project has built a cloud-based network of federated nodes, called Marble, that allows anyone seeking to extract insights from large volumes of climate data to undertake their study without concerning themselves with the logistics of acquiring, cleaning and storing data. The aspiration for building this network is to provide a low-barrier entry not only to those working in core climate change research, but also to those involved in climate mitigation, resilience and adaptation work, and to policy makers, non-profits, educators and students. Marble is one of the platforms selected to contribute to the 'Open Science Platform' component of the OGC’s OSPD initiative.

The user-facing aspect of the platform comprises two components: (i) the Jupyter compute environment and (ii) the data server and catalogue. Here, we focus on the latter and present details of the infrastructure, developed on top of proven open-source software and standards (e.g. STAC), that allows for the discovery of and access to climate datasets stored anywhere on the network by anyone on the network. We will also discuss the publication capability of the platform, which allows a user to host their own data on the network and make it quickly available to others.
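
Since the catalogue builds on STAC, discovery from any node can be scripted with a standard STAC client. A hedged sketch, with the endpoint and collection name as placeholders for a Marble node:

    from pystac_client import Client

    cat = Client.open("https://example-marble-node.org/stac")  # placeholder
    results = cat.search(collections=["cmip6"],
                         datetime="2015-01-01/2100-12-31")

    # Each item points at assets that can be opened in place, e.g. via OPeNDAP
    for item in results.items():
        print(item.id, list(item.assets))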

How to cite: Chandan, D., Schwartz, M., and Easterbrook, S.: The Marble climate informatics platform: data discovery and data access, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-12962, https://doi.org/10.5194/egusphere-egu24-12962, 2024.

X4.190
|
EGU24-15898
Tamer Abu-Alam, Katie A. Smart, Per Pippin Aspaas, Leif Longva, Noortje Haugstvedt, and Karl Magnus Nilsen

In the realm of environmental and climate science, addressing the multifaceted challenges our planet faces necessitates a comprehensive approach. Holistic solutions are crucially dependent on the integration and interoperability of data. The polar regions, especially the Arctic, are particularly vulnerable to climate change, experiencing a rate of temperature increase that is four times faster than the global average [1]. Accelerated polar warming is frequently marked by sea ice loss, but also includes shrinking habitats for polar biospheres that in turn drastically affect Arctic peoples. Though enhanced at the poles, the effects of warming are wide-ranging across the oceans and continents of our planet, affecting weather patterns, ecosystems and human activities. Polar research is thus invaluable for researchers and policymakers and should be widely and freely available. However, in 2019 a significant findability gap was discovered for open access polar records, indicating the need for a cross-disciplinary research service to provide efficient and seamless access to open polar research [2].

The Open Polar database [3] was launched in 2021 in cooperation between the University Library at UiT The Arctic University of Norway and the Norwegian Polar Institute. Open Polar promotes Findable and Accessible polar research, such that researchers, policymakers, and society have equal and unfettered access to polar region publications and data. Open Polar harvests metadata from over 4600 open access providers, filters for polar research using over 11000 keywords, and enriches the resulting records by defining geolocations and applying correct DOIs, before finally building the Open Polar database, which is searchable by standard text or geolocation. Currently, the database includes nearly 2.5 million open access records, consisting of approximately 75% publications and 25% datasets. Nearly two years after its launch, Open Polar maintains consistently robust engagement, and we aim to improve its usage by incorporating new sources, reducing redundancies, and considering integration with data archiving and open education services.
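
As a toy illustration of the harvest-and-filter step, the sketch below keeps only records whose metadata matches a polar keyword list; the records and the keyword set are placeholders for the ~4600 providers and >11000 terms used in practice:

    POLAR_KEYWORDS = {"arctic", "antarctic", "svalbard", "sea ice", "permafrost"}

    def is_polar(record: dict) -> bool:
        """Keep a harvested record if its title/abstract mentions a polar term."""
        text = f"{record.get('title', '')} {record.get('abstract', '')}".lower()
        return any(kw in text for kw in POLAR_KEYWORDS)

    records = [
        {"title": "Sea ice extent trends", "abstract": "..."},
        {"title": "Mediterranean seagrass mapping", "abstract": "..."},
    ]
    print([r["title"] for r in records if is_polar(r)])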

 [1] Rantanen, M., Karpechko, A.Y., Lipponen, A. et al. (2022). The Arctic has warmed nearly four times faster than the globe since 1979. Commun Earth Environ 3, 168. https://doi.org/10.1038/s43247-022-00498-3 

 [2] Abu-Alam, T. S. (2019). Open Arctic Research Index: Final report and recommendations. Septentrio Reports, (3). https://doi.org/10.7557/7.4682 

 [3] https://openpolar.no/ 

How to cite: Abu-Alam, T., Smart, K. A., Aspaas, P. P., Longva, L., Haugstvedt, N., and Nilsen, K. M.: Open Polar: A Comprehensive Database for Advancing Arctic and Antarctic Research, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-15898, https://doi.org/10.5194/egusphere-egu24-15898, 2024.

X4.191
|
EGU24-10536
Salsabyl Benlalam, Benoit Derode, Fabien Engels, Marc Grunberg, and Jean Schmittbuhl

The Data Center for Deep Geothermal Energy (CDGP – Centre de Données de Géothermie Profonde, https://cdgp.u-strasbg.fr/) was launched in 2016 and is managed by the Interdisciplinary Thematic Institute for Geosciences for the energy system Transition (ITI GeoT, https://geot.unistra.fr/), with the purpose of archiving, preserving and distributing deep geothermal data from the Alsace region (France) for the scientific community and R&D activities. The CDGP is furthermore an internal node of EPOS TCS Anthropogenic Hazards (https://www.epos-eu.org/tcs/anthropogenic-hazards); the data provided for geothermal sites in Alsace are therefore also available on the EPISODES Platform (https://episodesplatform.eu/), which enables users to process and analyze the data they download. The CDGP collects high-quality data from different phases of deep geothermal projects, especially from the exploration and development phases. The aim of this service is to provide downloadable multi-disciplinary data, ranging from industrial hydraulic information to seismic records and catalogs, through geological logs and fault maps, for example. The data are thoroughly filtered, controlled and validated by analysts, and are grouped into “episodes”: sets of relevant geophysical data correlated over time that establish links between anthropogenic seismicity and an industrial activity.

As part of the European Geo-INQUIRE project (GA n. 101058518, https://www.geo-inquire.eu/), we are now expanding the types of data that we distribute. The raw data (RINEX) from GNSS stations that monitor the surface deformation around geothermal sites are now available on the website. As a next step, we will add complementary information and metadata to the provided database (e.g. precise positions/velocities/strain) thanks to our collaboration with EPOS TCS GNSS. We are also developing strategies with EPOS TCS GIM (Geological Information and Modeling) to provide geological maps and borehole data for the “episode” sites. The aim is to use the TCS GIM services currently under development and to benefit from the synergy between the various leading projects.

Specific procedures have also been implemented since the beginning of the project to respect international requirements for data management. FAIR recommendations, for example, are followed to distribute data that are Findable, Accessible, Interoperable, and Reusable. 

How to cite: Benlalam, S., Derode, B., Engels, F., Grunberg, M., and Schmittbuhl, J.: CDGP data center, new data for interdisciplinarity research , EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-10536, https://doi.org/10.5194/egusphere-egu24-10536, 2024.

X4.192
|
EGU24-19603
Claudio Dema, Francesco Izzi, Lucia Mona, Vito Salvia, Carmela Cornacchia, Ermann Ripepi, Michele Volini, and Gelsomina Pappalardo

The Italian Integrated Environmental Research Infrastructures System (ITINERIS) Project coordinates a network of national nodes from 22 RIs.

ITINERIS has been designed for synergy with the European RI framework, and it will support the participation of Italian scientists in pan-European initiatives (such as ENVRI-Hub NEXT and EOSC). ITINERIS will have a significant impact on national environmental research, providing scientific support for the design of actionable environmental strategies.

To this end, ITINERIS will build the Italian HUB of Research Infrastructures in the environmental scientific domain, providing a single, unified access point to the facilities, FAIR data, and related services available within the Italian RI network through an easy-to-navigate interface.

Through the ITINERIS HUB, all users will have access to a complete catalogue of data and services under a proper access-management system. ITINERIS will not duplicate data already provided by European RIs, but will make them discoverable in a single place together with other national RI resources. Additionally, the ITINERIS HUB will provide access to data resulting from specific ITINERIS project activities, such as campaign data and novel data products.

ITINERIS will also offer a system of Virtual Research Environments (VREs) that will provide new services to address scientifically and societally relevant issues, starting from an ensemble of cross-disciplinary actions on the data, information, and knowledge generated by the different RIs in the different environmental subdomains.

State-of-the-art applications and custom-made tools will be integrated in the HUB to respond to two needs: collecting and preparing information on products, resources, and services so that users can easily discover and access them and the RIs behind them; and facilitating the management of user access requests through automated workflows and features specific to online submission and management systems.

The core design includes a GeoServer for discovering, visualising, and plotting the available RI data. More precisely, a scalable infrastructure will be implemented to provide mapping services compliant with the Open Geospatial Consortium (OGC) standards WMS (Web Map Service), WFS (Web Feature Service), and WCS (Web Coverage Service). A Metadata Service will be responsible for providing OGC CSW services, offering data discovery through metadata consultation according to the most common metadata profiles. A stack of support services (REST/SOAP interfaces) for queries on geographic databases will also be provided.
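
As a sketch of how a client might consume the planned OGC services once they are online, the snippet below uses the OWSLib Python package; the endpoint URLs are placeholders, since the ITINERIS services are still under development.

# Sketch of a client consuming OGC CSW and WMS services with OWSLib.
# Endpoint URLs are placeholders: the ITINERIS HUB services are not yet public.
from owslib.csw import CatalogueServiceWeb
from owslib.wms import WebMapService

# Discover datasets through the CSW metadata catalogue.
csw = CatalogueServiceWeb("https://hub.example.org/csw")
csw.getrecords2(maxrecords=10)
for record_id, record in csw.records.items():
    print(record_id, record.title)

# Render a map layer through the WMS endpoint.
wms = WebMapService("https://hub.example.org/geoserver/wms", version="1.3.0")
layer = list(wms.contents)[0]
img = wms.getmap(layers=[layer], srs="EPSG:4326",
                 bbox=(6.0, 35.0, 19.0, 48.0),  # roughly Italy
                 size=(800, 600), format="image/png")
with open("map.png", "wb") as f:
    f.write(img.read())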

The ITINERIS HUB is a candidate to become the integrated system of the Italian environmental RIs to be connected to European initiatives such as ENVRI-Hub NEXT and EOSC, fostering the ability to address current and expected environmental challenges at the national level and beyond.

Acknowledgement

IR0000032 – ITINERIS, Italian Integrated Environmental Research Infrastructures System (D.D. n. 130/2022 - CUP B53C22002150006) Funded by EU - Next Generation EU PNRR- Mission 4 “Education and Research” - Component 2: “From research to business” - Investment 3.1: “Fund for the realisation of an integrated system of research and innovation infrastructures”

How to cite: Dema, C., Izzi, F., Mona, L., Salvia, V., Cornacchia, C., Ripepi, E., Volini, M., and Pappalardo, G.: ITINERIS HUB: the unified access point to Italian environmental facilities, FAIR data and related services, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-19603, https://doi.org/10.5194/egusphere-egu24-19603, 2024.

X4.193
|
EGU24-20005
Simo Tukiainen, Tuomas Siipola, Niko Leskinen, and Ewan O'Connor

Clouds strongly regulate the Earth’s water cycle and the planetary radiation balance. They are one of the largest contributors to the overall uncertainty in climate feedbacks, which propagates into global temperature projections (Arias et al., 2021). The Cloudnet data repository provides long-term datasets of cloud property profiles at high temporal and vertical resolution, derived from synergistic ground-based measurements and numerical weather prediction model data.

These datasets can be used, for example, to validate satellite-based products and to improve the accuracy of climate and weather forecast models. Cloudnet is part of the Aerosol, Clouds and Trace Gases Research Infrastructure (ACTRIS) which is now in the implementation phase and plans to be fully operational in 2025 (Häme et al., 2018).

Cloudnet receives data regularly from around 20 stationary observational facilities. Each facility is equipped with instruments that meet the requirements of the ACTRIS Centre for Cloud Remote Sensing (CCRES). These instruments include Doppler cloud radars, Doppler lidars, ceilometers, microwave radiometers, disdrometers, and weather stations. We also host and process data from mobile and campaign platforms.

Cloudnet processes raw instrument data into cloud property products such as target classification of the scatterers, liquid and ice water content, and drizzle drop size distribution (Illingworth et al., 2007) using the open-source Python package CloudnetPy (Tukiainen et al., 2020). Processed data products are provided in near-real time, typically within one hour from the measurement. In the future, Cloudnet will also provide wind and boundary layer height products derived from Doppler lidar data.
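
Following the CloudnetPy documentation, the processing chain can be run with a few calls; the file names and site metadata below are placeholders.

# Sketch of the CloudnetPy processing chain (file names and site
# metadata are placeholders; see the CloudnetPy documentation).
from cloudnetpy.instruments import mira2nc, ceilo2nc
from cloudnetpy.categorize import generate_categorize
from cloudnetpy.products import generate_classification

# 1. Convert raw instrument files to calibrated netCDF.
mira2nc("raw_radar.mmclx", "radar.nc", {"name": "Hyytiala"})
ceilo2nc("raw_lidar.DAT", "lidar.nc", {"name": "Hyytiala", "altitude": 181})

# 2. Combine radar, lidar, microwave radiometer, and NWP model data.
generate_categorize({"radar": "radar.nc", "lidar": "lidar.nc",
                     "mwr": "mwr.nc", "model": "model.nc"},
                    "categorize.nc")

# 3. Derive a product, e.g. target classification of the scatterers.
generate_classification("categorize.nc", "classification.nc")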

All the raw and processed data are freely available according to the FAIR principles (Wilkinson et al., 2016) via cloudnet.fmi.fi. Furthermore, our software is freely and openly available at https://github.com/actris-cloudnet/.

ACKNOWLEDGEMENTS

We thank the Academy of Finland Flagship (grant no. 337552), the European Union’s Horizon 2020 research and innovation programme (grant no. 654109), and ACTRIS (project no. 739530, grant no. 871115) for funding this project.

REFERENCES

Arias et al. (2021). Technical Summary. In: Climate Change 2021: The Physical Science Basis. Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA.

Häme et al. (2018). ACTRIS stakeholder handbook 2018. Painotalo Trinket Oy.

Illingworth et al. (2007). Cloudnet: Continuous Evaluation of Cloud Profiles in Seven Operational Models Using Ground-Based Observations. Bulletin of the American Meteorological Society, 88(6), 883–898.

Tukiainen, S., O’Connor, E., and Korpinen, A. (2020). CloudnetPy: A Python package for processing cloud remote sensing data. Journal of Open Source Software, 5(53), 2123.

Wilkinson et al. (2016). The FAIR Guiding Principles for scientific data management and stewardship. Scientific Data, 3(1), 160018.

How to cite: Tukiainen, S., Siipola, T., Leskinen, N., and O'Connor, E.: Cloudnet – an ACTRIS data repository for cloud remote sensing observations, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-20005, https://doi.org/10.5194/egusphere-egu24-20005, 2024.

X4.194
|
EGU24-6810
Chris Hagerbaumer, Colleen Marciel Fontelera Rosales, Russ Biggs, and Gabe Fosse

OpenAQ is the largest open-source, open-access repository of air quality data in the world, integrating and hosting over 50 billion measurements from air monitors and sensors at more than 59,000 ground-level locations across 153 countries. The OpenAQ platform supports data on a variety of pollutants at different temporal frequencies. The platform is a one-stop solution for accessing air quality data in a consistent and harmonized format, facilitating findability, accessibility, interoperability, and reusability. OpenAQ utilizes modern cloud computing architectures and open-source data tools to maintain a highly scalable data pipeline, which can be resource- and compute-intensive and thus requires thoughtful, efficient data management and engineering practices. As an open-source platform grounded in community, OpenAQ strives to be transparent, responsible, user-focused, sustainable, and technologically driven.
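
A hedged sketch of pulling a few monitoring locations from the platform's REST API follows; the v3 endpoint path, parameter names, and response fields reflect our reading of the public documentation at docs.openaq.org and may differ, and the API key is a placeholder.

# Sketch of querying the OpenAQ REST API (v3 requires a free API key).
# Endpoint path and parameter names are our reading of the public docs
# and may differ; check docs.openaq.org before use.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder

resp = requests.get(
    "https://api.openaq.org/v3/locations",
    headers={"X-API-Key": API_KEY},
    params={"coordinates": "-1.286389,36.817223",  # near Nairobi
            "radius": 10_000,                      # metres
            "limit": 5},
    timeout=30,
)
resp.raise_for_status()
for location in resp.json()["results"]:
    print(location["id"], location["name"])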

OpenAQ supports innovation and collaboration in the air quality space by: 

  • Ingesting and sharing data on an open, low-bandwidth platform to ensure data is broadly accessible
  • Providing tools to help interpret the data and create visualizations for users with varied technical skills
  • Providing a user guide and trainings on how to use the OpenAQ platform for community-level pilot purposes and beyond
  • Catalyzing specific analyses through intentional outreach to a broad community of data stakeholders

OpenAQ has been widely used for research, informing nearly 300 scientific and data-oriented publications/proceedings. OpenAQ trainings and workshops around the world have resulted in community statements demanding increased coverage and frequency of air quality monitoring, the donation of air quality monitoring equipment to local communities, and adoption of APIs to make open-source city data available. As one example, our work with the Clean Air Catalyst supports pilots to clean the air in Jakarta (Indonesia), Indore (India) and Nairobi (Kenya). As another example, our Community Ambassador program trains emerging air quality leaders in low- and middle-income countries to utilize open data to spur community action to fight air pollution. 

Our poster describes how OpenAQ ingests and harmonizes heterogeneous air quality data at scale and how we conduct outreach to increase impactful usage of the hosted data.

How to cite: Hagerbaumer, C., Rosales, C. M. F., Biggs, R., and Fosse, G.: OpenAQ: Harmonizing Billions of Air Quality Measurements into an Open and FAIR Database, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-6810, https://doi.org/10.5194/egusphere-egu24-6810, 2024.

X4.195
|
EGU24-12891
Enrico Serpelloni, Lucia Zaccarelli, Licia Faenza, Antonio Caracausi, Carlos Almagro Vidal, Francesco Pintori, Eugenio Mandler, and Lauro Chiaraluce

Earthquakes, intricate natural events spanning multiple spatio-temporal scales, necessitate a comprehensive understanding of the physical and chemical processes driving a broad spectrum of fault slip modes. To achieve this, the acquisition of dense, multidisciplinary datasets is imperative. Near Fault Observatories (NFOs) play a pivotal role by offering spatially and temporally dense, high-precision near-fault data, fostering the generation of novel observations and innovative scientific insights. However, the integration and interpretation of diverse datasets from various disciplines (geophysics, geochemistry, hydrology, etc.) present challenges. These datasets often consist of time series depicting the temporal evolution of different parameters, sampling diverse temporal and spatial scales, depths, and the distinct or cumulative effects of various multiscale processes.

In this presentation, we share outcomes from the INGV multidisciplinary project MUSE (Multiparametric and mUltiscale Study of the Earthquake preparatory phase in the central and northern Apennines). Our emphasis lies in showcasing the approaches developed to analyze, integrate, and extract new knowledge from the EPOS Near Fault Observatory TABOO. This state-of-the-art observatory, managed by the Istituto Nazionale di Geofisica e Vulcanologia (INGV), comprises a dense network with an average spacing of approximately 5 km between multidisciplinary sensors. These sensors, deployed at the surface and in shallow boreholes, include seismometric, geodetic, geochemical, hydrological, and strain stations. The project's core objective is to unravel the interconnections between different observables and explore the causal relationships among them. We will present the datasets and the methods employed, and discuss the significance of considering the interaction between fluid and solid geophysical processes in comprehending earthquake phenomena. Additionally, we will outline the innovative scientific products that can arise from this research, contributing to a deeper understanding of earthquake processes.
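
As a generic illustration of one integration step involved in such work (not the MUSE project's actual methodology), the sketch below resamples two synthetic observables with different native sampling onto a common weekly time base and scans for a lagged correlation between them.

# Generic illustration (not the MUSE workflow): align two observables with
# different native sampling on a common time base and scan lagged correlations.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
t_geod = pd.date_range("2023-01-01", periods=365, freq="D")  # daily geodetic series
t_chem = pd.date_range("2023-01-01", periods=52, freq="W")   # weekly geochemical series
strain = pd.Series(np.cumsum(rng.normal(size=365)), index=t_geod)
co2 = pd.Series(np.cumsum(rng.normal(size=52)), index=t_chem)

# Common weekly time base: average the dense series, interpolate the sparse one.
strain_w = strain.resample("W").mean()
co2_w = co2.resample("W").mean().interpolate()
common = pd.concat({"strain": strain_w, "co2": co2_w}, axis=1).dropna()

# Lagged Pearson correlation over ±8 weeks.
for lag in range(-8, 9):
    r = common["strain"].corr(common["co2"].shift(lag))
    print(f"lag {lag:+d} weeks: r = {r:.2f}")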

How to cite: Serpelloni, E., Zaccarelli, L., Faenza, L., Caracausi, A., Almagro Vidal, C., Pintori, F., Mandler, E., and Chiaraluce, L.: Multidisciplinary analysis of near fault observatory data: example from the Alto Tiberina fault (Northern Apennines, Italy), EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-12891, https://doi.org/10.5194/egusphere-egu24-12891, 2024.

X4.196
|
EGU24-21562
Geochemistry within the Near Fault Observatory: the Alto Tiberina case study (Italy)
(withdrawn)
Antonio Caracausi, Dario Buttitta, Marco Camarda, Lauro Chiaraluce, Sofia De Gregorio, Rocco Favara, and Antonio Fabio Pisciotta
X4.197
|
EGU24-16403
Marianna Cangemi, Carlo Doglioni, Paolo Madonia, Mario Mattia, and Giulio Selvaggi

The Strait of Messina, separating Sicily from continental Italy, is an area prone to several severe geological hazards. Many of the most devastating earthquakes in Italy have occurred here, including the M 7.1 Messina-Reggio Calabria earthquake of 28 December 1908, the most intense event recorded in southern Europe in the instrumental era. On both sides, the strait is overlooked by a mountain chain descending directly onto a narrow, densely urbanized coastal belt. Its steep slopes, composed of geological terrains with poor geotechnical characteristics, are affected by diffuse mass movements, such as the landslide of 1 October 2009, triggered by intense rainfall, which destroyed several small villages immediately south of Messina and caused 37 casualties. The Peloro Cape area, the north-eastern tip of Sicily, hosts a system of lagoons protected by the Ramsar Convention but also of economic interest, as it is exploited for shellfish farming; these lagoons are extremely sensitive to changes in sea level and temperature, which can pose serious threats to their ecological stability. This complex scenario exhibits a further criticality: the planned bridge linking Sicily and continental Italy which, if realized, will be the longest single-span bridge in the world.

This complex natural and built environment needs a multidisciplinary monitoring network to mitigate the multiple risks that affect both its natural and anthropic components. Implementing such a network is the aim of Work Package 5 “NEMESI” of the Italian PNRR project MEET, funded under the post-COVID-19 national plan for recovery and resilience within the framework of the European Next Generation EU initiative.

Part of this multidisciplinary monitoring system will be a hydrogeochemical network of 11 stations that measure temperature, water level, electrical conductivity, turbidity, and dissolved O2 and CO2, record the data on a local logger, and transmit them to the INGV data centre.
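
As a purely hypothetical sketch of what one station record and a pre-transmission plausibility check might look like (field names and thresholds are illustrative, not the network's actual design):

# Hypothetical sketch of a station record and a plausibility check before
# transmission to the data centre; field names and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class StationRecord:
    station_id: str
    timestamp: str              # ISO 8601, UTC
    temperature_c: float
    water_level_m: float
    conductivity_us_cm: float   # electrical conductivity, microsiemens/cm
    turbidity_ntu: float
    dissolved_o2_mg_l: float
    dissolved_co2_mg_l: float

def plausible(rec: StationRecord) -> bool:
    """Flag physically implausible values before transmission."""
    return (-5.0 <= rec.temperature_c <= 50.0
            and rec.conductivity_us_cm >= 0.0
            and rec.dissolved_o2_mg_l >= 0.0)

rec = StationRecord("MES01", "2024-04-17T12:00:00Z",
                    18.2, 1.35, 38000.0, 2.1, 7.9, 12.4)
assert plausible(rec)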

The main challenge in implementing the Strait of Messina hydrogeochemical network is the correct selection of the monitoring sites, which will be located in underground and surface water bodies whose physico-chemical characteristics must simultaneously work as indicators of very different processes: changes in electrical conductivity due to sea level rise; variations in temperature and piezometric levels induced by permeability changes driven by seismic and aseismic deformation; and changes in oxygenation, turbidity, and dissolved CO2, which can be controlled both by eutrophication and by mixing with deep volatiles, whose flux is driven by neotectonic activity.

Accomplishing this mission, and producing open access data of interest to stakeholders ranging from the scientific community to the shellfish food industry, will require a truly multidisciplinary approach, embracing geological, geophysical, geodetic, geochemical, eco-hydrological, and socio-economic data.

How to cite: Cangemi, M., Doglioni, C., Madonia, P., Mattia, M., and Selvaggi, G.: Implementation of a hydrogeochemical monitoring network following a multi-risk vision: the Strait of Messina (Italy) case., EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-16403, https://doi.org/10.5194/egusphere-egu24-16403, 2024.

X4.198
|
EGU24-20704
Armita Davarpanah, Fatemeh Shafiei, and Na'Taki Jelks

This study employs spatial and semantic modeling to formally specify the intersection of environmental concerns and social justice, focusing on the unequal impact of environmental hazards on economically disadvantaged people living in the Proctor Creek watershed within the Atlanta metropolitan area. Our 'Public Health-Urban-Socio-economic-Environmental Justice' (PUSH-EJ) ontology formally integrates environmental justice indices and concepts from the EPA’s EJScreen, such as Environmental Indicator, Demographic Indicator, EJ Index, Particulate Matter, Risk, and Proximity to hazardous sites. PUSH-EJ also formalizes the National Air Toxics Assessment (NATA)’s Air Toxics Cancer Risk, Air Toxics Respiratory Hazard Index, and Diesel Particulate Matter. The modeled environmental indicators include airborne levels of ozone and particulate matter 2.5 (particles 2.5 micrometers or smaller), the lead paint indicator (houses built before 1960), the count of leaking underground storage tanks, and the count of and proximity to wastewater discharge areas. The ontology also models the proximity of housing units to waste and hazardous chemical facilities or sites related to the National Priorities List (NPL) Superfund Program, Risk Management Plan (RMP) sites, and Treatment, Storage, and Disposal Facilities (TSDFs), as well as traffic volume. Environmental, demographic, and socioeconomic indicators are mapped to the objectives of UN SDGs 1, 3, 4, 5, 8, and 10, bridging the gap between environmental justice, public health, urban dynamics, and socio-economic aspects. Our analysis of Proctor Creek's socioeconomic indicators reveals a combined Demographic Index of 73%, driven by Low Income (61%) and People of Color (90%). These findings indicate that Proctor Creek exhibits the lowest scores across all categories when compared with other regions in Georgia, EPA Region 4, and the nation. Our results call on the authorities responsible for the watershed to minimize contamination in the Proctor Creek area and to uplift socioeconomic conditions. Our spatial analysis highlights urgent priorities in the Proctor Creek area for the management of air toxics sources and emissions, and for addressing proximity issues linked to environmental pollutants from hazardous waste sites, lead paint, and traffic.
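
A minimal sketch, using the rdflib package, of how a few of the concepts named above might be declared follows; the namespace URI, class names, and property are hypothetical stand-ins, not the published PUSH-EJ ontology.

# Minimal rdflib sketch of declaring a few ontology concepts named in the
# abstract; namespace URI and property names are hypothetical stand-ins.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

PUSH = Namespace("https://example.org/push-ej#")
g = Graph()
g.bind("push", PUSH)

for cls in ("EnvironmentalIndicator", "DemographicIndicator", "EJIndex"):
    g.add((PUSH[cls], RDF.type, OWL.Class))

# Particulate matter as a subclass of environmental indicator.
g.add((PUSH.ParticulateMatter25, RDF.type, OWL.Class))
g.add((PUSH.ParticulateMatter25, RDFS.subClassOf, PUSH.EnvironmentalIndicator))

# An observed value for the Proctor Creek demographic index (from the abstract).
g.add((PUSH.ProctorCreekDemographicIndex, RDF.type, PUSH.DemographicIndicator))
g.add((PUSH.ProctorCreekDemographicIndex, PUSH.percentValue, Literal(73)))

print(g.serialize(format="turtle"))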

How to cite: Davarpanah, A., Shafiei, F., and Jelks, N.: Semantic and spatial Proximity Modeling of Equitable Sustainability in Proctor Creek, Atlanta: Merging Environmental Justice and Sustainable Development, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-20704, https://doi.org/10.5194/egusphere-egu24-20704, 2024.

Posters virtual: Wed, 17 Apr, 14:00–15:45 | vHall X4

Display time: Wed, 17 Apr 08:30–Wed, 17 Apr 18:00
Chairpersons: Fabio Feriozzi, Marta Gutierrez, Jacco Konijn
vX4.31
|
EGU24-12102
|
Highlight
Nikos Mamassis, Romanos Ioannidis, Christos Daskalakis, Fotis Loukidis-Andreou, Margarita Zakynthinou-Xanthi, Lucas Gicquel, Lucile Samah--Ribeiro, Filio Iliopoulou, G.-Fivos Sargentis, and Konstantinos Moraitis

The fields of information technology and geoinformatics have experienced rapid growth and widespread public adoption, with technologies like crowdsourcing facilitating advances in how the public can communicate with scientific communities and even contribute valuable data.

However, there is still hesitation about actively engaging the public in environmental or landscape-related studies. The stark contrast between the availability of crowdsourcing technologies and their lack of use is particularly noticeable in university education, where the technological potential of smartphones, widely owned and used by students, remains largely untapped for educational and research purposes. This study is part of a larger exploration of the potential of engaging students in participatory georeferenced landscape assessment, aiming to advance relevant environmental research and to make education in landscape and architecture more interactive and synergistic.

Starting from an initial theoretical investigation, our work proceeded to examine the developed ideas in practice. A dedicated crowdsourcing mobile application was developed and tested in a pilot study with a small number of students, before proceeding to the inclusion of large numbers of students, which is the end goal of the ARCHIMAP crowdsourcing project. This initial “test” targeted both practical challenges and challenges related to the software and the generated data. To this aim, Lycabettus hill and the surrounding neighbourhoods were investigated as a case study. Students were given the application and their interactions with it were recorded in detail: tracking their movement and location, recording their landscape and architecture assessments, and evaluating the technical performance of the application.

Beyond observing technical and functional challenges, the study also initiated a brief investigation of the potential utility of the results. This was carried out by implementing a conventional method of landscape analysis, the so-called ULQI (Urban Landscape Quality Index), and investigating its correlation and potential synergy with the results submitted by the students through the novel crowdsourcing app for georeferenced landscape assessment.
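
A minimal sketch of such a correlation check, with fabricated placeholder values rather than study data, could look like this:

# Sketch of a rank correlation between expert ULQI scores and mean
# crowdsourced ratings at the same viewpoints. Values are fabricated
# placeholders, not study data.
from scipy.stats import spearmanr

ulqi_scores = [62, 48, 71, 55, 80, 43, 67]          # expert ULQI per viewpoint
crowd_means = [3.4, 2.1, 3.9, 2.8, 4.5, 2.0, 3.6]   # mean student rating (1-5)

rho, p_value = spearmanr(ulqi_scores, crowd_means)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")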

The results demonstrated that the developed app was both functional and useful, and could therefore be shared with more students of NTUA, with expected benefits both for educational processes and for the institution's scientific research on landscape quality.

How to cite: Mamassis, N., Ioannidis, R., Daskalakis, C., Loukidis-Andreou, F., Zakynthinou-Xanthi, M., Gicquel, L., Samah--Ribeiro, L., Iliopoulou, F., Sargentis, G.-F., and Moraitis, K.: A preliminary analysis of a crowdsourcing platform for participatory assessment of urban landscapes by university students using GIS, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-12102, https://doi.org/10.5194/egusphere-egu24-12102, 2024.

vX4.32
|
EGU24-11659
Liping Di, Eugene Yu, Liying Guo, Patrick Quinn, and Joshua Lieberman

Geospatial data are data with location information. They are very diverse and are widely used in various socioeconomic applications and decision-making. Typically, geospatial data obtained from data providers need to go through a long chain of pre-processing and quality measures before they can be analyzed for a specific application. For a given type of geospatial data, many of these pre-processes and quality measures are common to different users regardless of the application. It is therefore possible to pre-apply the common pre-processes and quality measures so that repetitive preprocessing is avoided, the pre-processing chain on the user side is significantly shortened, and the data are more ready for analysis. Geospatial data to which a set of pre-processes has been pre-applied, meeting certain quality specifications and ready for analysis in applications, are called geospatial analysis ready data (ARD).

In the satellite remote sensing domain, the Committee on Earth Observation Satellites (CEOS) has defined CEOS Analysis Ready Data (CEOS-ARD) as satellite remote sensing data that have been processed to a minimum set of requirements and organized into a form that allows immediate analysis with a minimum of additional user effort, and interoperability both through time and with other datasets. CEOS has set a number of ARD product family specifications (PFS) and encouraged its member space agencies to produce CEOS ARD PFS compliant products. However, the CEOS ARD PFS are limited to satellite remote sensing data and are not recognized international standards, which prevents them from being widely accepted and adopted by the broad geospatial community. Other geospatial communities, such as ARD.Zone, are also developing their own ARD concepts. Formal ARD standardization through authoritative international standards bodies is necessary to achieve broad uptake, particularly by the commercial sector, to promote wide acceptance of the standardized concept, and to help avoid the divergence that can be caused by various groups working towards different interpretations of the concept.

Therefore, a joint effort between ISO TC 211 and the Open Geospatial Consortium (OGC) was officially formed in May 2023 to set international ARD standards by building the broadest possible consensus within the geospatial community. ISO has designated the geospatial ARD standards as ISO 19176, and the first to be developed is ISO 19176-1: Geographic information — Analysis Ready Data — Part 1: Framework and Fundamentals. In addition, OGC, through its testbed and pilot initiatives, has been evaluating the applicability, advantages, and gaps of using existing geospatial ARD products from various sources in different applications. The findings and lessons learned from these evaluations are feeding back into the development of ISO 19176.

This presentation will report the progress so far on the development of ISO 19176-1 and recap the findings from ARD activities in OGC Testbed 19. It will discuss the joint ISO/OGC ARD standard development process, the ISO 19176-1 development timeline, the ARD framework and UML models defined in ISO 19176-1, the findings from OGC Testbed 19 on ARD, and the future work plan.

How to cite: Di, L., Yu, E., Guo, L., Quinn, P., and Lieberman, J.: Standardization of geospatial analysis ready data via OGC and ISO , EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-11659, https://doi.org/10.5194/egusphere-egu24-11659, 2024.