Modern challenges of climate change, disaster management, public health and safety, resource management, and logistics can only be effectively addressed through big data analytics. Advances in technology are generating vast amounts of geospatial data on local and global scales. The integration of artificial intelligence (AI) and machine learning (ML) has become crucial in analysing these datasets, leading to the creation of maps and models across the various fields of the geosciences. Recent studies, however, highlight significant challenges when applying ML and AI to spatial and spatio-temporal data along the entire modelling pipeline, including reliable accuracy assessment, model interpretation, transferability, and uncertainty assessment. Recognition of this gap has led to the development of new spatio-temporally aware strategies and methods that promise to improve spatio-temporal predictions, the treatment of the cascade of uncertainties, decision making, and communication.
This session discusses challenges and advances in spatial and spatio-temporal machine learning methods and the software and infrastructures to support them.
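To make the accuracy-assessment challenge concrete, here is a minimal sketch of spatially blocked cross-validation, one widely used alternative to random k-fold splits for spatial data. All data, block sizes, and features below are synthetic and purely illustrative, assuming scikit-learn is available:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(0)

# Synthetic point samples with a spatially smooth target variable.
xy = rng.uniform(0, 100, size=(500, 2))                      # sample locations
y = np.sin(xy[:, 0] / 20) + np.cos(xy[:, 1] / 20) + 0.1 * rng.normal(size=500)
X = np.column_stack([xy, rng.normal(size=(500, 3))])         # coords + noise features

# Spatial blocking: assign each sample to a 25 x 25 grid cell so that
# training and test folds never share a block, unlike random k-fold CV.
blocks = (xy[:, 0] // 25).astype(int) * 4 + (xy[:, 1] // 25).astype(int)

scores = cross_val_score(
    RandomForestRegressor(n_estimators=100, random_state=0),
    X, y, groups=blocks, cv=GroupKFold(n_splits=5), scoring="r2",
)
print("spatially blocked CV R^2 per fold:", scores.round(2))
```

Scores from such blocked folds are typically lower, and more honest, than randomly cross-validated ones, because spatial autocorrelation no longer leaks between training and test sets.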
Recent breakthroughs in machine learning, notably deep learning, which enable data-driven AI models to exploit massive amounts of data, have created unprecedented potential for large-scale environmental monitoring through remote sensing. Despite the success of existing deep learning-based approaches in remote sensing for many applications, their shortcomings in jointly leveraging various aspects of Earth observation data prevent us from fully exploiting the potential of remote sensing for the environment. In particular, integrating multiple data modalities and remote sensing sensors, applying deep learning methods to multi-spatial/spectral-resolution Earth observation data, and modeling space and time together offer remarkable opportunities for a comprehensive and accurate understanding of the environment. In this session, we aim to gather the community to delve into the latest scientific advances that leverage these multi-dimensional approaches to tackle pressing environmental challenges.
Earth, its weather and climate constitute a complex system whose monitoring and forecasting have witnessed remarkable progress in recent years. In particular, enhanced spaceborne observations with the integration of Machine/Deep Learning (ML/DL) techniques are key drivers of innovation in Earth System Observation and Prediction (ESOP) for Weather and Climate. In parallel, the concept of Digital Twins (DTs) of the Earth has emerged as a revolutionary approach to address climate resilience, disaster risk management, and sustainable development through highly detailed, high-fidelity digital replicas of the Earth system.
Recently, ML/DL techniques have attracted significant attention and seen increased adoption within the ESOP community due to their ability to enhance our capacity to simulate and predict the Earth's complex dynamics. At the same time, DTs serve as comprehensive monitoring, simulation, and prediction systems that enable us to analyse and better comprehend the intricate interactions between natural phenomena and human activities.
The focus of the session is on exploring new data sources and benchmarks for weather and climate modelling, the adaptation of large-scale physics- or data-driven Earth system models, the integration of real-time multi-disciplinary data, and demonstrations of practical applications of these systems in addressing climate impacts, resilience, and sustainability. This session invites experts from diverse fields to discuss how recent advances innovate on established ESOP approaches to better address current challenges, and to identify opportunities for future work as well as synergies across domains.
A key emphasis will be placed on the societal implications of these technologies, showcasing how ML-enhanced ESOP and Earth Digital Twins can empower policymakers with tailored insights for optimizing resource management, designing effective adaptation strategies, and building resilience against severe weather and climate challenges.
Geospatial Foundation Models (GeoFMs) have shown great promise in a wide range of applications for Earth Observation (EO) and Earth System Modelling (ESM), as well as for Weather and Climate. With the increasing number of models being published, model inter-comparison is key to identifying the best GeoFM for deployment. This session aims to highlight efforts on model development, benchmarking, fine-tuning, and best practices for utilizing GeoFMs in real-world applications. We invite submissions focused on creating GeoFMs that leverage multi-modal, multi-temporal, and multi-resolution datasets towards sensor independence. Diverse FMs for EO, ESM, Weather, and Climate can revolutionize data analysis by handling text, imagery, and time series, enabling insights into natural hazards and climate resilience. Our session will cover advances in data curation, model architecture, scaling, benchmarking, pretraining, fine-tuning, and MLOps for GeoFMs, including use cases and deployment strategies.
The topics of our session, all revolving around GeoFMs, are:
1. Benchmarks & Evaluation: Establish standardized fair evaluation metrics and benchmarks to assess the performance and capabilities of GeoFMs in multi-modal data analysis, ensuring reliability and efficiency.
2. Pre-Training Strategies & Best Practices: Discuss data sampling strategies, proxy tasks, and scalable model training for efficient pre-training of GeoFMs. Guidelines for using existing pre-trained GeoFMs for a diverse set of applications, with a focus on how to decide which models are best suited to certain use cases.
3. Sensor Independence: GeoFMs can process data from various sensors, enabling holistic analysis of the Earth's dynamics.
4. Multi-Modal/Temporal: GeoFMs offer novel approaches to multi-modal data analysis and spatio-temporal change detection.
5. Scientific Insights: Highlighting the scientific insights enabled through the creation of GeoFMs, particularly in relation to geo-physical principles and causal relations.
6. Community Involvement & Impact: How to build an open-science community around GeoFMs that is easily accessible to all while keeping an eye on potential societal, environmental, and economic impacts when deploying GeoFMs.
We aim to foster discussions on current applications, challenges, and opportunities of GeoFMs, seeking contributions from AI and domain researchers, climate modelers, industry experts, and stakeholders in AI, HPC, and Big Data.
The recent growing number of probes in the heliosphere and the future missions in preparation have led to the current decade being labelled "the golden age of heliophysics research". With more viewpoints and more data downlinked to Earth, machine learning (ML) has become a precious tool for planetary and heliospheric research, helping to process the increasing amount of data and supporting the discovery and modelling of physical systems. Recent years have also seen the development of novel approaches that leverage complex data representations with highly parameterised machine learning models and combine them with well-defined and well-understood physical models. These advancements in ML with physical insights, such as physics-informed neural networks, inspire new questions about how each field can help advance the other. To better understand this intersection between data-driven learning approaches and physical models in planetary sciences and heliophysics, we seek to bring ML researchers and physical scientists together in this session and to stimulate the interaction of both fields through state-of-the-art approaches and cross-disciplinary visions of the field.
The "ML for Planetary Sciences and Heliophysics" session aims to provide an inclusive and cutting-edge space for discussions and exchanges at the intersection of machine learning, planetary and heliophysics topics. This space covers (1) the application of machine learning/deep learning to space research, (2) novel datasets and statistical data analysis methods over large data corpora, and (3) new approaches combining learning-based with physics-based to bring an understanding of the new AI-powered science and the resulting advancements in heliophysics research.
Topics of interest include all aspects of ML and heliophysics, including, but not limited to: space weather forecasting, computer vision systems applied to space data, time-series analysis of dynamical systems, new machine learning models and data-assimilation techniques, and physically informed models.
One of the big challenges in Earth system science is to provide reliable climate predictions on sub-seasonal, seasonal, decadal, and longer timescales. The resulting data have the potential to be translated into climate information, leading to better assessment of global and regional climate-related risks.
The main goals of the session are (i) to identify gaps in current climate prediction methods and (ii) to report and evaluate the latest progress in climate forecasting on subseasonal-to-decadal and longer timescales. This will include presentations and discussions of developments in predictions for the different time horizons from dynamical ensemble and statistical/empirical forecast systems, as well as the aspects required for their application: forecast quality assessment, multi-model combination, bias adjustment, downscaling, exploration of artificial-intelligence methods, etc.
Following the new WCRP strategic plan for 2019-2029, we solicit contributions on prediction enhancements that embrace climate forecasting from an Earth system science perspective. This includes the study of coupled processes between atmosphere, land, ocean, and sea-ice components, as well as the impacts of coupling and feedbacks in physical, hydrological, chemical, biological, and human dimensions. Contributions are also sought on initialization methods that optimally use observations from different Earth system components, on assessing and mitigating the impacts of model errors on skill, and on ensemble methods.
We also encourage contributions on the use of climate predictions for climate impact assessment, demonstrations of end-user value for climate risk applications and climate-change adaptation, and the development of early warning systems.
A special focus will be put on the use of operational climate predictions (C3S, NMME, S2S), results from the CMIP5-CMIP6 decadal prediction experiments, and climate-prediction research and application projects.
An increasingly important aspect of climate forecast applications is the use of the most appropriate downscaling methods, based on dynamical, statistical, or artificial-intelligence approaches or their combination, which are needed to generate time series and fields with an appropriate spatial or temporal resolution. This is extensively considered in the session, which therefore brings together scientists from all geoscientific disciplines working on prediction and application problems.
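As a concrete illustration of one of the bias-adjustment techniques mentioned above, here is a minimal sketch of empirical quantile mapping on synthetic data. The distributions are invented for illustration; operational implementations handle seasonality, trends, and out-of-range values far more carefully:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic climatologies: the model is biased warm/wide relative to observations.
obs_clim = rng.gamma(2.0, 2.0, size=5000)            # observed reference climate
model_clim = rng.gamma(2.0, 2.5, size=5000) + 1.0    # biased model climate
forecast = rng.gamma(2.0, 2.5, size=100) + 1.0       # new raw forecast values

# Map each forecast value from the model distribution onto the observed one.
quantiles = np.linspace(0.01, 0.99, 99)
model_q = np.quantile(model_clim, quantiles)
obs_q = np.quantile(obs_clim, quantiles)
adjusted = np.interp(forecast, model_q, obs_q)

print(f"raw mean {forecast.mean():.2f}, adjusted mean {adjusted.mean():.2f}, "
      f"observed mean {obs_clim.mean():.2f}")
```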
In recent years, technologies based on Artificial Intelligence (AI), such as image processing, smart sensors, and intelligent inversion, have garnered significant attention from researchers in the geosciences community. These technologies offer the promise of transitioning geosciences from qualitative to quantitative analysis, unlocking new insights and capabilities previously thought unattainable.
One of the key reasons for the growing popularity of AI in geosciences is its unparalleled ability to efficiently analyze vast datasets within remarkably short timeframes. This capability empowers scientists and researchers to tackle some of the most intricate and challenging issues in fields like Geophysics, Seismology, Hydrology, Planetary Science, Remote Sensing, and Disaster Risk Reduction.
As we stand on the cusp of a new era in geosciences, the integration of artificial intelligence promises to deliver more accurate estimations, efficient predictions, and innovative solutions. By leveraging algorithms and machine learning, AI empowers geoscientists to uncover intricate patterns and relationships within complex data sources, ultimately advancing our understanding of the Earth's dynamic systems. In essence, artificial intelligence has become an indispensable tool in the pursuit of quantitative precision and deeper insights in the fascinating world of geosciences.
For this reason, the aim of this session is to explore new advances in and approaches to AI in the geosciences.
The complexity of hydrological and Earth systems poses significant challenges to our ability to predict and understand them. The advent of machine learning (ML) provides powerful tools for modeling these complex systems. However, realizing their full potential in this field is not just about algorithms and data; it requires a cooperative interaction between domain knowledge and data-driven power. This session aims to explore the frontier of this convergence and how it facilitates a deeper understanding of hydrological processes and their interactions with the atmosphere and biosphere across spatial and temporal scales.
We invite researchers working in the fields of explainable AI, physics-informed ML, hybrid Earth system modeling (ESM), and AI for causal and equation discovery in hydrology and Earth system sciences to share their methodologies, findings, and insights. Submissions are welcome on topics including, but not limited to:
- Explainability and transparency in ML/AI modeling of hydrological and Earth systems;
- Process and knowledge integration in ML/AI models;
- Data assimilation and hybrid ESM approaches;
- Causal learning and inference in ML models;
- Data-driven equation discovery in hydrological and Earth systems;
- Data-driven process understanding in hydrological and Earth systems;
- Challenges, limitations, and solutions related to hybrid models and XAI.
Deep Learning has seen accelerated adoption across Hydrology and the broader Earth Sciences. This session highlights the continued integration of deep learning and its many variants into traditional and emerging hydrology-related workflows. We welcome abstracts related to novel theory development, new methodologies, or practical applications of deep learning in hydrological modeling and process understanding. This might include, but is not limited to, the following:
(1) Development of novel deep learning models or modeling workflows.
(2) Probing, exploring and improving our understanding of the (internal) states/representations of deep learning models to improve models and/or gain system insights.
(3) Understanding the reliability of deep learning, e.g., under non-stationarity and climate change.
(4) Modeling human behavior and impacts on the hydrological cycle.
(5) Deep Learning approaches for extreme event analysis, detection, and mitigation.
(6) Natural Language Processing in support of models and/or modeling workflows.
(7) Applications of Large Language Models and Large Multimodal Models (e.g. ChatGPT, Gemini, etc.) in the context of hydrology.
(8) Uncertainty estimation for and with Deep Learning.
(9) Advances towards foundation models in the context of hydrology and Earth Sciences more generally.
(10) Exploration of different training strategies, such as self-supervised learning, unsupervised learning, and reinforcement learning.
In this short course we will address the increasing role of artificial intelligence (AI) in geoscientific research, guiding participants through the various stages of the research process where AI tools can be implemented effectively and responsibly. We will explore freely available AI tools that can be used for data analysis, model development, and research publication. Additionally, the course aims to provoke reflection on the ethical implications of AI use, addressing concerns such as data bias, transparency, and the potential for misuse. Participants will engage in interactive discussions to explore what constitutes responsible and acceptable use of AI in geoscientific research, aiming to establish a set of best practices for integrating AI into scientific workflows.
Model Land is a conceptual place within the boundaries of a model. When we mistake a Model Land for the real world, we can be ignorant of the assumptions, limitations, uncertainties, and biases inherent in our models. These need to be carefully understood and considered before we use models to inform decisions about the real world; by doing so we can escape from our Model Lands (Thompson, 2019).
However, in order to escape, we need to explore our Model Lands, mapping them and developing a deeper understanding of their rules and boundaries. In this short course we will present a framework inspired by tabletop roleplay games (TTRPGs) that brings Model Lands to life. Using either your own model or one of our examples, you will learn how to build a world that follows its rules, how to investigate what it would be like to exist within that world, and how to share with others what you have learnt.
Please bring along a pen and paper and be prepared to share your Model Lands. We want to encourage creative expression, so if you have a flair for drawing, poetry, games design, or interpretive dance, feel free to bring along the means to share your creations through whatever medium you prefer.
Data assimilation (DA) is widely used in the study of the atmosphere, the ocean, the land surface, hydrological processes, and more. This powerful technique combines prior information from numerical model simulations with observations to provide a better estimate of the state of the system than either the data or the model alone. This short course will introduce participants to the basics of data assimilation, including the theory and its applications to various disciplines of geoscience. An interactive hands-on example of building a data assimilation system based on a simple numerical model will be given. This will prepare participants to build a data assimilation system for their own numerical models after the course.
In summary, the short course introduces the following topics:
(1) DA theory, including basic concepts and selected methodologies.
(2) Examples of DA applications in various geoscience fields.
(3) Hands-on exercise in applying data assimilation to an example numerical model using open-source software.
This short course is aimed at people who are interested in data assimilation but do not necessarily have experience with it, in particular early-career scientists (BSc, MSc, PhD students, and postdocs) and people who are new to the field.
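To preview the hands-on flavour of the course, the sketch below cycles a textbook scalar Kalman filter over a toy "numerical model". This is a generic illustration of the forecast-analysis cycle, not the course material itself:

```python
import numpy as np

def model_step(x):
    """Toy 'numerical model': damped persistence drifting toward x = 10."""
    return 0.9 * x + 1.0

rng = np.random.default_rng(42)

# Synthetic truth and noisy observations (observation error variance R).
n_cycles, R, Q = 50, 1.0, 0.3 ** 2     # Q: model error variance
truth = np.zeros(n_cycles)
for k in range(1, n_cycles):
    truth[k] = model_step(truth[k - 1]) + rng.normal(0.0, np.sqrt(Q))
obs = truth + rng.normal(0.0, np.sqrt(R), n_cycles)

# Cycle: forecast with the model, then update with the latest observation.
x_a, P_a = 5.0, 4.0                    # initial analysis mean and variance
analyses = np.zeros(n_cycles)
for k in range(n_cycles):
    x_f = model_step(x_a)              # forecast step
    P_f = 0.9 ** 2 * P_a + Q
    K = P_f / (P_f + R)                # Kalman gain: weight forecast vs. obs
    x_a = x_f + K * (obs[k] - x_f)     # analysis step
    P_a = (1.0 - K) * P_f
    analyses[k] = x_a

print("RMSE of observations:", np.sqrt(np.mean((obs - truth) ** 2)).round(2))
print("RMSE of analyses:    ", np.sqrt(np.mean((analyses - truth) ** 2)).round(2))
```

The analysis errors come out smaller than the observation errors because each update optimally blends two imperfect sources of information, which is the essence of DA.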
Over the last decade, a flurry of machine learning methods has led to novel insights throughout geophysics. The applications are as wide-ranging as the data types processed, which include environmental parameters, GNSS, InSAR, infrasound, and seismic data, but also downstream structured data products such as 3D data cubes, earthquake catalogs, and seismic velocity changes. Countless methods have been proposed and successfully applied, ranging from traditional techniques to recent deep learning models. At the same time, we are increasingly seeing the adoption of machine learning techniques in the wider geophysics community, driven by continuously growing data archives, accessible codes, and software. Yet, the landscape of available methods and data types is difficult to navigate, even for experienced researchers.
In this session, we want to bring together machine learning researchers and practitioners across the domains of geophysics. We aim to identify common challenges connecting different tasks, data types, and formats, and to outline best practices for the development and use of machine learning. We also want to discuss how recent trends in machine learning, such as foundation models, the shift to multimodality, or physics-informed models, may impact geophysical research. We welcome contributions from all fields of geophysics, covering a wide range of data types and machine learning techniques. We also encourage contributions on machine-learning-adjacent tasks, such as big-data management, data visualization, or software development in the field of machine learning.
Assessing the spatial heterogeneity of environmental variables is a challenging problem in real applications when numerical approaches are needed. This is made more difficult by the complexity of Natural Phenomena, which are characterized by (Chiles and Delfiner, 2012):
- being unknown: knowledge of them is often incomplete, derived from limited and sparse samples;
- dimensionality: they can be represented in two- or three-dimensional domains;
- complexity: deterministic interpolators (e.g., Inverse Distance Weighting) may fail to provide exhaustive spatial distribution models, as they do not consider uncertainty;
- uniqueness: by invoking a probabilistic approach, they can be treated as a realization of a random process and described by regionalized variables.
Geostatistics provides optimal solutions to this issue, offering tools to accurately predict values and quantify uncertainty at unsampled locations while accounting for the spatial correlation of samples.
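The central tool for quantifying that spatial correlation is the (semi)variogram. Below is a minimal sketch of computing an empirical semivariogram on synthetic samples in plain NumPy, purely illustrative of the "Measures of Spatial Variability" part of the course outline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sparse samples of an environmental variable at 2-D locations.
coords = rng.uniform(0, 100, size=(200, 2))                 # x, y in metres
values = np.sin(coords[:, 0] / 15) + 0.1 * rng.normal(size=200)

# All pairwise separation distances and squared value differences.
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
sq_diff = (values[:, None] - values[None, :]) ** 2
iu = np.triu_indices_from(d, k=1)                           # count each pair once

# Empirical semivariogram: gamma(h) = mean of 0.5 * (z_i - z_j)^2 per lag bin.
bins = np.linspace(0, 50, 11)
for lo, hi in zip(bins[:-1], bins[1:]):
    pairs = (d[iu] >= lo) & (d[iu] < hi)
    gamma = 0.5 * sq_diff[iu][pairs].mean()
    print(f"lag {0.5 * (lo + hi):5.1f} m   semivariance {gamma:.3f}")
```

Fitting a model (e.g., spherical or exponential) to such a curve then supplies the spatial-dependence structure that kriging uses to predict values and uncertainty at unsampled locations.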
The course will address theoretical and practical methods for evaluating data heterogeneity in computational domains, exploiting the interplay between geometry processing, geostatistics, and stochastic approaches. It is split into four parts, as follows:
- Theoretical Overview: Introduction to Random Function Theory and Measures of Spatial Variability
- Modeling Spatial Dependence: An automatic solution to detect both isotropic and anisotropic spatial correlation structures
- The role of Unstructured Meshes: Exploration of flexible, robust, and adaptive geometric modeling, coupled with stochastic simulation algorithms
- Filling the Mesh: Developing a compact and tangible spatial model that incorporates all alternative realizations, statistics, and uncertainty
The course will offer a comprehensive understanding of the key steps in creating a spatial predictive model with geostatistics. We will also promote MUSE (Modeling Uncertainty as a Support for Environments) (Miola et al., STAG2022), an innovative and user-friendly open-source software package that implements the entire methodology. Tips on how to use MUSE will be provided, along with explanations of its structure and executable commands. Impactful examples will show the effectiveness of geostatistical modeling with MUSE and its flexibility across scenarios ranging from geology to geochemistry.
The course is designed for everyone interested in geostatistics and spatial distribution models, regardless of their prior experience.
Since the breakthrough of datacubes as a contributor to Analysis-Ready Data, a series of implementations have been announced, and likewise services. However, these are often described in publications only.
In this session, hands-on demos are mandatory. Speakers must spend at most 50% of their time presenting slides and at least 50% on live demos of their tools. To guarantee fair and equal conditions, only in-person presentations will be allowed. Presenters are invited to share their latest and greatest functionality and data, but must balance this with their confidence that the demo will work out of the box.
This enables the audience to see first-hand which datacube features are actually implemented, how stable they are under strict timing conditions, and what their real-life performance is. The expected outcome for attendees is a realistic overview of the datacube tool and service landscape, and an assessment of how well each tool supports their individual needs, such as analysis-readiness.
In a changing climate, extreme weather and climate events have become more frequent and severe, and are expected to continue increasing in this century and beyond. Unprecedented extremes in temperature, heavy precipitation, droughts, storms, river floods, and related hot and dry compound events have increased over the last decades, negatively impacting broad socio-economic spheres (such as agriculture), damaging infrastructure, and putting human well-being at risk, to name but a few. This has raised many concerns in society and within the scientific community about our current climate and our projected future. Thus, a better understanding of the climate and the possible changes we will face is strongly needed. To address these issues, and to cover a wide range of uncertainties, very large data volumes are needed across different spatial (from local-regional to global) and temporal scales (past, current, future); the sources are multiple (observations, satellites, models, reanalyses, etc.), and their resolutions vary. To deal with huge amounts of information, and to take advantage of these different resolutions and properties, high-performance computational techniques and Artificial Intelligence models are being explored in climate and weather research. In this short course, a novel method using Deep Learning models to detect and characterize extreme weather and climate events will be presented. The method can be applied to several types of extreme events, but the first implementation, on which we will focus in the short course, is the detection of past heatwaves. We will discuss the method as well as its applicability to different types of extreme events. The course will be developed in Python, and we encourage the climate and weather community to join the short course and the discussion!
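As a flavour of the labelling task involved, the sketch below marks hot spells in a synthetic daily temperature series using one common (though by no means unique) heatwave criterion: at least three consecutive days above the 90th percentile. The course's Deep Learning method goes well beyond such threshold rules, which typically only serve to build training labels:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Synthetic multi-year daily maximum temperature series (illustrative only).
days = pd.date_range("1994-01-01", periods=30 * 365, freq="D")
tmax = pd.Series(20 + 8 * np.sin(2 * np.pi * days.dayofyear / 365)
                 + 3 * rng.standard_normal(len(days)), index=days)

# Hot day: above the 90th percentile of the full record.
hot = tmax > tmax.quantile(0.90)

# Position of each day within its current hot spell (0 on non-hot days).
spell_pos = hot.astype(int).groupby((~hot).cumsum()).cumsum()

n_heatwaves = int((spell_pos == 3).sum())   # spells reaching length >= 3
print(f"{n_heatwaves} hot spells of three or more consecutive days")
```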
Machine Learning (ML) is on the rise as a tool for cryospheric sciences. It has been used to label, cluster, and segment cryospheric components, as well as emulate, project, and downscale cryospheric processes. To date, the cryospheric community mainly adapts and develops ML approaches for highly specific domain tasks. However, different cryospheric tasks can face similar challenges, and when an ML method addresses one problem, it might be transferable to others. Thus, we invite the community to share their current work and identify potential shared challenges and tasks. We invite contributions across the cryospheric domain, including snow, permafrost, glaciers, ice sheets, and sea ice. We especially call for submissions that use novel machine learning techniques; however, we welcome all ML approaches, ranging from random forests to deep learning. Other contributions, such as datasets, theoretical research, and community-building efforts, are also welcome. By identifying shared challenges and transferring knowledge, we aim to channel resources and increase the impact of ML as a tool to observe, assess, and model the cryosphere.
In the era of big data, the Environmental and Earth Sciences depend on advanced digital tools, seamless data integration, and interdisciplinary collaboration to tackle pressing global challenges. As sensors, surveys, and experiments produce vast quantities of data, researchers face increasing demands for effective data sharing, large computational resources, and interoperability. Research and e-Infrastructures, together with Virtual Research Environments (VREs), are transforming science by providing cohesive ecosystems that enable global collaboration, support the full research lifecycle, and facilitate shared access to data, computational resources, and communication networks.
We particularly encourage case studies demonstrating how researchers, data scientists, and engineers have successfully utilised infrastructures such as ENVRIs, the European Open Science Cloud (EOSC), and e-Infrastructures such as EGI and D4Science, as well as other relevant platforms and frameworks supporting interdisciplinary collaboration. Experiences from Virtual Access and Transnational Access programmes are also of interest, as are discussions on policies for infrastructure utilisation, software implementation, and lessons learned.
By highlighting collaborative frameworks, this session aims to stimulate discussion on how VREs and e-Infrastructures can enhance interdisciplinary research, streamline workflows, and deliver solutions to complex global environmental challenges. Join us as we explore how these digital ecosystems are driving innovation in Environmental and Earth Science research, enabling faster, more accurate models and fostering a global scientific community dedicated to addressing pressing ecological and climate-related issues.
Researchers in Earth System Science (ESS) address complex, interdisciplinary challenges that require the analysis of diverse data across multiple scales. Robust and user-friendly Research Data Infrastructures (RDIs) are crucial for supporting data management and collaborative analysis, and for addressing societal issues. This session will explore how RDIs can bridge the gap between user needs and sustainable ESS data solutions by fostering interdisciplinary collaboration and addressing key challenges in data management and interoperability.
We welcome contributions on the following themes:
- User-Centric Infrastructure Development: This includes user stories, storylines, and use cases that demonstrate the importance of cross-disciplinary and cross-scale data usage, innovative infrastructure concepts designed to meet specific user needs, and methods for developing high-quality user interfaces and portals.
- Interdisciplinary data fusion and stakeholder engagement: Contributions are welcome that address how RDIs and data centers can facilitate the seamless integration of diverse ESS data to tackle complex societal challenges. This includes exploring interdisciplinary data fusion techniques, strategies for engaging different stakeholders and approaches for integrating stakeholder knowledge into RDI development and data management practices.
- Sustainable software solutions and interoperability: This theme focuses on approaches to building and reusing sustainable software solutions that meet the diverse needs of ESS researchers, including interoperability challenges between different data sources and platforms, and considering appropriate building blocks. It also includes discussion of operation and sustainability models for diverse ESS data centers and strategies for fostering cooperation and interoperability.
- Transdisciplinary research and public engagement: We encourage contributions that explore how RDIs can support transdisciplinary research on sustainability challenges (e.g., climate change and its impacts) and facilitate public engagement with ESS issues through initiatives such as citizen science.
- Fostering cultural change and collaboration: This theme focuses on strategies for promoting cultural change within research communities to encourage data sharing, collaboration, and the adoption of FAIR principles. This also includes approaches to international collaboration and the development of effective collaboration patterns.
Seismological and geophysical research consistently uses sophisticated tools for data analysis, modelling, and interpretation. However, the rapid development and diversification of research software pose challenges in maintaining code quality, ensuring comprehensive documentation, achieving reproducibility of results, and enabling uninterrupted workflows that combine various tools for seamless data analysis. As researchers increasingly rely on complex computational tools, it becomes essential to address these challenges in scientific software development, to avoid inefficiencies and errors, and to ensure that scientific findings are reliable and can be built upon by future researchers.
We welcome contributions that introduce software tools/toolboxes and their real-world applications, showcasing how they have advanced the field and providing practical insights into the development/application process. Additionally, we seek presentations that discuss methodologies for software testing, continuous integration, upgrades, and deployment. Moreover, we are looking for case studies demonstrating the successful application of these tools to various seismological/geophysical problems and the value they can bring to the community.
Sharing of resources, toolboxes, and knowledge is encouraged to improve the overall quality and (re)usability of research software. We encourage the inclusion of demonstrations to showcase usability and functionality examples, as well as videos to illustrate proposed workflows. Videos and other resources can be added as supplementary material and will be available after the conference. Depending on the technical setup and the time available, we will also support live demonstrations for the on-site participants.
We warmly invite seismologists, geophysicists, software developers, and researchers to participate in this session and share their insights, experiences, and solutions to elevate software development standards and practices in our field. Join us to contribute to and learn from discussions that will drive innovation and excellence in seismological and geophysical research.
Recent Earth System Sciences (ESS) datasets, such as those resulting from high-resolution numerical modelling, have increased both in precision and in size. These datasets are central to the advancement of ESS for the benefit of all stakeholders and to public policymaking on climate change. Extracting the full value from these datasets requires novel approaches to access, process, and share data. It is apparent that datasets produced by state-of-the-art applications are becoming so large that even current high-capacity data infrastructures are incapable of storing them, let alone ensuring their usability. With future investment in hardware being limited, a viable way forward is to explore the possibilities of data compression and new data space implementations.
Data compression has gained interest as a way to make data more manageable, speed up transfer times, and reduce resource needs without affecting the quality of scientific analyses. Reproducing recent ML and forecasting results has become essential for developing new methods in operational settings. At the same time, replicability is a major concern for ESS and downstream applications, and the necessary data accuracy needs further investigation. Research on data reduction and prediction interpretability helps improve understanding of data relationships and prediction stability.
In addition, new data spaces are being developed in Europe, such as the Copernicus Data Space Ecosystem and the Green Deal Data Space, as well as multiple national data spaces. These provide streamlined access to data, cloud processing, and online visualization, generating actionable knowledge and enabling more effective decision-making. Analysis-ready data can easily be accessed via APIs, transforming data access and processing scalability. Developers and users will share opportunities and challenges of designing and using data spaces for research and industry.
This session connects developers and users of ESS big data, discussing how to facilitate the sharing, integration, and compression of these datasets, focusing on:
1) Approaches and techniques to enhance shareability of high-volume ESS datasets: data compression, novel data space implementation and evolution.
2) The effect of reduced data on the quality of scientific analyses.
3) Ongoing efforts to build data spaces and connect with existing initiatives on data sharing and processing, and examples of innovative services that can be built upon data spaces.
Cloud computing has emerged as a dominant paradigm, supporting industrial applications and academic research on an unprecedented scale. Despite its transformative potential, transitioning to the cloud continues to challenge organizations striving to leverage its capabilities for big data processing. Integrating cloud technologies with high-performance computing (HPC) unlocks powerful possibilities, particularly for computation-intensive AI/ML workloads. With innovations like GPUs, containerization, and microservice architectures, this convergence enables scalable solutions for Earth Observation (EO) and Earth System Modeling domains.
Pangeo (pangeo.io) represents a global, open-source community of researchers and developers collaborating to tackle big data challenges in geoscience. By leveraging a range of tools—from laptops to HPC and cloud infrastructure—the Pangeo ecosystem empowers researchers with an array of core packages, including Xarray, Dask, Jupyter, Zarr, Kerchunk, and Intake.
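A minimal sketch of the access pattern these packages enable follows; the Zarr store path and variable name are placeholders, not a real dataset, and reading s3:// URLs additionally requires the s3fs package:

```python
import xarray as xr

# Open a cloud-hosted Zarr store lazily: only metadata is read up front, and
# each variable is represented as a chunked Dask array, not in-memory data.
ds = xr.open_zarr("s3://example-bucket/era5-sample.zarr")   # placeholder path

# Build a computation graph: monthly climatology of 2 m temperature.
clim = ds["t2m"].groupby("time.month").mean("time")

# Nothing has been computed yet; Dask executes the graph in parallel
# only when results are requested.
result = clim.compute()
```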
This session focuses on use cases involving both cloud and HPC computing, showcasing applications of Pangeo's core packages. We invite contributions that explore cloud computing initiatives within these domains.
This session aims to:
• Assess the current landscape and outline the steps needed to facilitate the broader adoption of cloud computing in Earth Observation and Earth Modeling data processing.
• Inspire researchers using or contributing to the Pangeo ecosystem to share their insights with the broader geoscience community and showcase new applications of Pangeo tools addressing computational and data-intensive challenges.
We warmly welcome contributions that explore:
• Cloud Computing Initiatives: Federations, scalability, interoperability, multi-provenance data, security, privacy, and sustainable computing.
• Cloud Applications and Platforms: Development and deployment of IaaS, PaaS, SaaS, and XaaS solutions.
• Cloud-Native AI/ML Frameworks: Tools designed for AI/ML applications in EO and ESM.
• Operational Systems and Workflows: Cloud-based operational systems, data lakes, and storage solutions.
• HPC and Cloud Integration: Converging workloads to leverage the strengths of both computational paradigms.
In addition, we invite presentations showcasing applications of Pangeo’s core packages in:
• Atmosphere, Ocean, and Land Modeling
• Satellite Observations
• Machine Learning
• Cross-Domain Geoscience Challenges
This session emphasizes real-world use cases at the intersection of cloud and HPC computing. By sharing interactive workflows, reproducible research practices, and live executable notebooks, contributors can help map the current landscape and outline actionable pathways toward broader adoption of these transformative technologies in geoscience.
Mitigating earthquake disasters involves several key components and stages, from identifying and assessing risks to reducing their impact. These components include: a) long-term and time-dependent hazard analysis: anticipating the space-time characteristics of ground shaking and its cascading events; b) vulnerability and exposure assessment; and c) risk management: preparedness, rescue, recovery, and overall resilience. A variety of seismic hazard and risk models can be adopted at different spatial and temporal scales, incorporating diverse observations and requiring multi-disciplinary input. Testing and validating these methodologies, for all risk components, is essential for effective disaster mitigation.
The real-time integration of multi-parametric observations is expected to provide the major contribution to the development of operational time-Dependent Assessment of Seismic Hazard (t-DASH) systems suitable for supporting decision makers with continuously updated seismic hazard scenarios. A very preliminary step in this direction is the identification of those parameters (seismological, chemical, physical, etc.) whose space-time dynamics and/or anomalous variability can be, to some extent, associated with the complex preparation process of major earthquakes.
This session includes studies on various aspects of seismic risk research and assessment, as well as observations and/or data analysis methods within the t-DASH and Short-term Earthquake Forecast perspectives:
- Studies on time-dependent seismic hazard and risk assessments
- Development of physical/statistical models and studies based on long-term data analyses, including different conditions of seismic activity
- Application of AI to assess earthquake risk factors (hazard, exposure, and vulnerability). Exploring innovative data collection and processing techniques, such as statistical machine learning
- Estimating earthquake hazard and risk across different temporal and spatial scales and assessing the accuracy of these models against available observations
- Earthquake-induced cascading effects such as landslides and tsunamis, and multi-risk assessments
- Studies devoted to the description of genetic models of earthquake precursory phenomena
- Infrastructures devoted to maintaining and further developing our present observational capabilities for earthquake-related phenomena, also contributing to building a global multi-parametric Earthquakes Observing System (EQuOS) to complement the existing GEOSS initiative
Documenting and sharing databases is a crucial part of the scientific process, and more scientists are choosing to share their data in centralised data repositories. These repositories have the advantage of guaranteeing immutability (i.e., the data cannot change), but that property is ill-suited to living databases (e.g., in continuous citizen science initiatives). At the same time, citizen science initiatives are becoming more and more popular in various fields of science, from natural hazards to hydrology, ecology, and agronomy.
In this context, distributed databases offer an innovative approach to both data sharing and data evolution. These systems have the distinct advantage of becoming more resilient and available as more users access the same data, and because distributed systems, unlike decentralised ones, do not use blockchain technology, they are orders of magnitude more efficient in data storage as well as completely free to use. Distributed databases can also mirror existing data, so that scientists can keep working in their preferred Excel, OpenOffice, or other software while automatically syncing database changes to the distributed web in real time.
This workshop will present the general concepts behind distributed, peer-to-peer systems. Attendees will then be guided through an interactive activity with Constellation, a scientific software package for distributed databases, learning how to create their own databases and how to access and use others' data from the network. Potential applications include citizen science projects for hydrological data collection, invasive species monitoring, or community participation in managing natural hazards such as floods.
Interdisciplinarity is becoming a common approach to solving socio-ecological problems, but datasets from different disciplines often lack interoperability. In this short course we will explore interoperability levels in the context of integrated research infrastructure services for the climate change crisis.
WEkEO offers a single access point to all of the environmental data provided by the Copernicus programme, as well as additional data from its four partner organisations. While data access is the first step for research based on EO data, the challenges of handling data soon become overwhelming given the increasing volume of Earth observation data available. To cope with this challenge and to tame Big Earth Data, WEkEO offers a cloud-based processing service for Earth observation data from the Copernicus programme and beyond.
This course will explain new trends and developments in accessing, analysing and visualizing earth observation data by introducing concepts around serverless processing, parallel processing of big data and data cube generation in the cloud.
The session will begin with a theoretical introduction to cloud-based big data processing and data cube generation, followed by a demonstration of how participants can apply these concepts within the WEkEO environment using its tools. Participants will then have the opportunity to apply the concepts and tools in multi-disciplinary environmental use cases, bringing together different kinds of satellite data and Earth observation products as data cubes in the cloud.
The course will start with a beginner-level introduction and demonstration before introducing more advanced functionalities of the WEkEO services. Prior knowledge of satellite data analysis and/or Python programming would be an advantage but is not a prerequisite. Comprehensive training material will be provided during the course to ensure that participants with varying degrees of data processing knowledge can follow and participate.
The analysis and visualisation of data is fundamental to research across the earth and space sciences. The Pangeo (https://pangeo.io) community has built an ecosystem of tools designed to simplify these workflows, centred around the Xarray library for n-dimensional data handling and Dask for parallel computing. In this short course, we will offer a gradual introduction to the Pangeo toolkit, through which participants will learn the skills required to scale their local scientific workflows through cloud computing or large HPC with minimal changes to existing codes.
The course is beginner-friendly but assumes a prior understanding of the Python language. We will guide you through hands-on Jupyter notebooks that showcase scalable analysis of in-situ, satellite observation, and Earth system modelling datasets to apply your learning. By the end of this course, you will understand how to:
- Efficiently access large public data archives from Cloud storage using the Pangeo ecosystem of open source software and infrastructure.
- Leverage labelled arrays in Xarray to build accessible, reproducible workflows
- Use chunking to scale a scientific data analysis with Dask (see the sketch after this list)
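A minimal sketch of that chunking idea, using a synthetic stand-in for a large archive (array sizes and chunking are illustrative):

```python
import dask.array as da
import xarray as xr

# Ten years of synthetic daily global fields, chunked one year at a time.
temp = xr.DataArray(
    da.random.random((3650, 720, 1440), chunks=(365, 720, 1440)),
    dims=("time", "lat", "lon"),
    name="temperature",
)

# Each chunk is reduced independently and in parallel by Dask workers;
# the same code scales from a laptop to an HPC system or cloud cluster.
global_mean = temp.mean(("lat", "lon")).compute()
print(global_mean.shape)   # one value per day: (3650,)
```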
All the Python packages and training materials used are open source (e.g., MIT, Apache-2, CC-BY-4). Participants will need a laptop and internet access but will not need to install anything. We will be using the free and open Pangeo@EOSC (European Open Science Cloud) platform for this course. We encourage attendees from all career stages and fields of study (e.g., atmospheric sciences, cryosphere, climate, geodesy, ocean sciences) to join us for this short course. We look forward to an interactive session and will be hosting a Q&A and discussion forum at the end of the course, including opportunities to get more involved in Pangeo and open-source software development. Join us to learn about open, reproducible, and scalable Earth science!
Preparation: We recommend learners with no prior knowledge of Python review resources such as the Software Carpentry training material and Project Pythia in advance of this short course. Participants should bring a laptop with an internet connection. No software installation is required as resources will be accessed online using the Pangeo@EOSC platform. Temporary user accounts will be provided for the course and we will also teach attendees how to request an account on Pangeo@EOSC to continue working on the platform after the training course.
Addressing global environmental and socio-technical challenges requires interdisciplinary, data-driven approaches. Today’s research produces unprecedented volumes and complexity of value-added research data and an increasing number of interactive data services, putting traditional information management systems to the test. Collaborative infrastructures are challenged by their dual role of advancing research and scientific assessments while facilitating transparent data and software sharing.
Since the breakthrough of datacubes as a contributor to Analysis-Ready Data, a series of implementations have been announced, and likewise services. However, these are often described in publications only, without publicly accessible deployments to evaluate.
We invite abstracts from all data stakeholders that highlight innovative platforms, frameworks, datacube tools, services, systems, and initiatives designed to enhance the access and usability of data for research on topics such as climate change, natural hazards, and sustainable development. We welcome presentations describing collaborations across national and disciplinary boundaries, as well as live demos of datacube tools and services that contribute to building trustworthy and interoperable data networks, guided by UNESCO's Open Science recommendations and the FAIR and CARE data principles. The expected outcome for attendees is a realistic overview of the datacube tools, the service landscape, and the ongoing collaborations that enable researchers worldwide to address pressing global problems through data.
Public information:
The session is organized in two time blocks, the first focusing on collaboration and the second on the tool aspects of Open Science.
Solicited authors:
Colin Price, Reyna Jenkyns
Co-organized by ERE1/GI2, co-sponsored by AGU and JpGU
Almost a decade ago, the FAIR data guiding principles were introduced to the broader research community. These principles proposed a framework to increase the reusability of data within and across domains, during and after the completion of, for example, research projects. In subdomains of the Earth System Sciences (ESS), such as the atmospheric sciences and parts of the geosciences, data reuse across institutions and geographical borders was already well established, supported by community-specific and cross-domain standards like netCDF-CF and geospatial standards (e.g., OGC). Further, authoritative data producers, such as those contributing to the CMIPs, were already using Persistent Identifiers and corresponding handle systems for data published in their repositories, so it was often thought and communicated that this data was "FAIR by design".
However, fully implementing the FAIR principles, particularly machine-actionability (the core idea behind FAIR), has proven challenging. Despite progress in awareness, standard-compliant data sharing, and the automation of data provenance, the ESS community continues to struggle to reach a community-wide consensus on the design, adoption, interpretation, and implementation of the FAIR principles.
In this session, we invite contributions from all fields in Earth System Sciences that provide insights, case studies, and innovative approaches to advancing the adoption of the FAIR data principles. We aim to foster a collaborative dialogue on the progress our community has made, the challenges that lie ahead, and the strategies needed to achieve widespread acceptance and implementation of these principles, ultimately enhancing the future of data management and reuse.
We invite contributions focusing on, but not necessarily limited to,
- Challenges and solutions in interpreting and implementing the FAIR principles in different sub-domains of the ESS
- FAIR onboarding strategies for research communities
- Case studies of successful FAIR data implementation (or partial implementation) in ESS at infrastructure and research project level
- Methods and approaches to gauge the impact of FAIR data implementation in ESS
- Considerations on how AI might help to implement FAIR
- Future direction for FAIR data in ESS
Performing research in Earth System Science is increasingly challenged by the escalating volumes and complexity of data, requiring sophisticated workflow methodologies for efficient processing and data reuse. The complexity of computational systems, such as distributed and high-performance heterogeneous computing environments, further increases the need for advanced orchestration capabilities to perform and reproduce simulations effectively. Along the same lines, the emergence of data-driven models alongside traditional compute-driven ones introduces additional challenges for workflow management. This session delves into the latest advances in workflow concepts and techniques essential to address these challenges, taking into account the different aspects linked with High-Performance Computing (HPC), Data Processing and Analytics, and Artificial Intelligence (AI).
In the session, we will explore the importance of the FAIR (Findability, Accessibility, Interoperability, and Reusability) principles and provenance in ensuring data accessibility, transparency, and trustworthiness. We will also consider the balance between reproducibility and security, addressing potential workflow vulnerabilities while preserving research integrity.
Attention will be given to workflows in federated infrastructures and their role in scalable data analysis. We will discuss cutting-edge techniques for modeling and data analysis, highlighting how these workflows can manage otherwise unmanageable data volumes and complexities, as well as best practices and progress from various initiatives and challenging use cases (e.g., Digital Twins of the Earth and the Ocean).
We will gain insights into FAIR Digital Objects, (meta)data standards, linked-data approaches, virtual research environments, and Open Science principles. The aim is to improve data management practices in a data-intensive world.
On these topics, we invite contributions from researchers illustrating their approaches to scalable workflows, as well as from data and computational experts presenting current approaches offered and developed by IT infrastructure providers that enable cutting-edge research in Earth System Science.
In recent decades, advances in geoinformation technology have played an increasingly important role in determining the various parameters that characterize the Earth's environment. These technologies, often combined with conventional field surveying, spatial data analysis methods, and/or simulation process models, provide efficient means for monitoring and understanding the Earth's environment in a cost-effective and systematic manner. This session invites contributions focusing on modern open-source software tools developed to facilitate the analysis of mainly geospatial data in any branch of the geosciences, for the purpose of better understanding the Earth's natural environment. We encourage contributions of any kind of open-source tool, including those built on top of globally used commercial GIS solutions. Potential topics include software tools developed for displaying, processing, and analysing geospatial data, and modern cloud webGIS platforms and services used for geographical data analysis and cartographic purposes. We also welcome contributions presenting tools that make use of parallel processing on high-performance computers (HPC) and graphics processing units (GPUs), as well as simulation process models applied in any field of the geosciences.
Code is read far more often than it's written, yet some still believe that complex, unreadable code equates to a better algorithm. In reality, the opposite is true. Writing code that not only works but is also clear, maintainable, and easy to modify can significantly reduce the cognitive load of coding, freeing up more time for scientific research. This short course introduces essential programming practices, from simple yet powerful techniques like effective naming, to more advanced topics such as unit testing, version control, and managing virtual environments. Through real-life examples, we will explore how to transform code from convoluted to comprehensible.
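As a taste of the course content, here is a before/after illustration of the simplest technique on that list, effective naming (the function itself is a made-up example):

```python
# Before: the code works, but the reader must decode its intent.
def f(a, b):
    return (a - b) / b * 100

# After: the same computation, now self-explanatory and documented.
def relative_change_percent(new_value, baseline):
    """Return the change from baseline to new_value as a percentage."""
    return (new_value - baseline) / baseline * 100
```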
Software plays a pivotal role in various scientific disciplines. Research software may include source code files, algorithms, computational workflows, and executables. The term refers mainly to code meant to produce data, and less so to, for example, plotting scripts one might create to analyze that data. One example of research software in our field is computational models of the environment. Models can aid pivotal decision-making by quantifying the outcomes of different scenarios, e.g., varying emission scenarios. How can we ensure the robustness and longevity of such research software? This short course teaches the concept of sustainable research software: software that is easy to update, maintain, and extend with new ideas, staying in sync with the most recent scientific findings. This maintainability should also be possible for researchers who did not originally develop the code, ultimately leading to more reproducible science.
This short course will delve into sustainable research software development principles and practices. The topics include:
- Properties and metrics of sustainable research software
- Writing clear, modular, reusable code that adheres to coding standards and best practices of sustainable research software (e.g., documentation, unit testing, FAIR for research software).
- Using simple code quality metrics to develop high-quality code
- Documenting your code using platforms like Sphinx for Python (see the short sketch after this list)
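As a minimal sketch of the documentation and testing bullets above (a hypothetical function, not ReWaterGAP code; Sphinx can render such reStructuredText docstrings, and pytest collects the test):

```python
def runoff_coefficient(precipitation_mm: float, runoff_mm: float) -> float:
    """Compute the runoff coefficient of a catchment.

    :param precipitation_mm: Precipitation sum over the period, in mm.
    :param runoff_mm: Runoff sum over the same period, in mm.
    :returns: Dimensionless ratio of runoff to precipitation.
    :raises ValueError: If precipitation is not positive.
    """
    if precipitation_mm <= 0:
        raise ValueError("precipitation_mm must be positive")
    return runoff_mm / precipitation_mm


def test_runoff_coefficient():
    # pytest-style unit test pinning down the expected behaviour.
    assert runoff_coefficient(100.0, 25.0) == 0.25
```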
We will apply these principles to a case study of a reprogrammed version of the global WaterGAP Hydrological Model (https://github.com/HydrologyFrankfurt/ReWaterGAP). We will showcase its current state in a GitHub environment along with example source code. The model is written in Python but is also accessible to non-Python users, and the principles demonstrated apply to all coding languages and platforms.
This course is intended for early-career researchers who create and use research models and software. Basic programming or software development experience is required. The course has limited seats available on a first-come-first-served basis.
Python is one of the fastest growing programming languages and has moved to the forefront of the earth system sciences (ESS), due to its usability, its applicability to a range of different data sources and, last but not least, the development of a considerable number of ESS-friendly and ESS-specific packages.
This interactive Python course is aimed at ESS researchers who are interested in adding a new programming language to their repertoire. Apart from some understanding of fundamental programming concepts (e.g. loops, conditions, functions, etc.), this course presumes no previous knowledge of or experience in Python programming.
The goal of this course is to give the participants an introduction to the Python fundamentals and an overview of a selection of the most widely-used packages in ESS. The applicability of those packages ranges from (simple to advanced) number crunching (e.g. Numpy), to data analysis (e.g. Xarray, Pandas) to data visualization (e.g. Matplotlib).
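As a small, hedged taste of such a workflow (synthetic data, not course material), the packages named above interlock naturally:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Number crunching with NumPy: a synthetic daily temperature series.
rng = np.random.default_rng(seed=42)
days = pd.date_range("2024-01-01", periods=365, freq="D")
temperature = 10 + 8 * np.sin(2 * np.pi * days.dayofyear / 365) + rng.normal(0, 2, 365)

# Data analysis with Pandas: resample the series to monthly means.
series = pd.Series(temperature, index=days, name="temperature_degC")
monthly_mean = series.resample("MS").mean()

# Visualisation with Matplotlib.
monthly_mean.plot(marker="o", title="Monthly mean temperature (synthetic)")
plt.ylabel("°C")
plt.show()
```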
The course will be grouped into different sections, based on topics discussed, packages introduced and field of application. Furthermore, each section will have an introduction to the main concepts e.g. fundamentals of a specific package and an interactive problem-set part.
This course welcomes active participation in terms of both on-site/virtual discussion and coding. To achieve this goal, i) the course curriculum and material will be provided in the form of Jupyter Notebooks, ii) participants will have the opportunity to code up solutions to multiple problem sets, and iii) pre-written working solutions will be readily available. In these interactive sections of the course, participants are invited to try out their newly acquired skills and code up potentially different working solutions.
We very much encourage everyone who is interested in career development, data analysis and learning a new programming language to join our course.
Julia offers a fresh approach to scientific computing, high-performance computing and data crunching. Designed from the ground up in recent years, Julia avoids many of the weak points of older, widely used programming languages in science such as Python, Matlab, and R. Julia is an interactive scripting language, yet it executes at speeds similar to C(++) and Fortran. These qualities make it an appealing tool for the geoscientist.
Julia has been gaining traction in the geosciences over the last years in applications ranging from high-performance simulations, data processing, geostatistics, machine learning, differentiable programming to scientific modelling. The Julia package ecosystem necessary for geosciences has substantially matured, which makes it readily usable for research.
This course provides a hands-on introduction to get you started with Julia as well as a showcase of geodata visualisation and ocean, atmosphere and ice simulations.
The hands-on introduction will cover:
- an overview of the Julia language and what sets it apart from other languages
- writing simple Julia code to get you started with scientific programming (arrays, loops, input/output, etc.)
- installing Julia packages and managing package environments (similar, e.g., to virtual environments in Python)
The showcase will feature a selection of:
- Visualisation of Geo-Data using the plotting library Makie.jl and various geodata libraries
- Global ocean modelling with Oceananigans.jl on CPUs and GPUs
- Interactive atmospheric modelling with SpeedyWeather.jl
- Ice flow modelling and data integration and sensitivity analysis using automatic differentiation on GPUs with FastIce.jl
Ideally, participants should install Julia on their laptops to allow a smooth start into the course; we will provide detailed documentation for this installation. We will also provide a JupyterHub, although connectivity to it may be spotty depending on Wi-Fi reception.
Imagine you have a wonderful and ancient butterfly collection that you would like to catalogue and digitize, and in such a way that other scientists can derive added value from it. What additional considerations would you need, apart from the technical requirements? What is the right way to describe your collection, so that not only specialists can find the data, but also an engineer looking for inspiration for new aerodynamic concepts, or an artist looking for specific natural colours, patterns or shapes?
Working together with specialists from your own research field is easy – you think in the same way, use the same language, the same vocabulary. It is the same in every discipline and in every field of research; it is interdisciplinarity that is the challenge. In Earth System Science the number of disciplines is huge, as is the number of well-known community standards, controlled vocabularies, ontologies, and so on. Without translation (mapping) from one discipline to another, and without some help or tool to collect and find such terminologies, it is chaos.
In this course we want to analyze together the gaps that arise in your daily research activities due to interdisciplinary work and projects. What are the misunderstandings, the data misinterpretations you are facing? In an interactive process you will learn more about a solution to bring order to the chaos - terminologies and the services around them. The aim is to make it easier for you to produce and/or understand reusable data in the future. FAIR is more than a buzzword - let's bring it to life!
The ongoing atmospheric warming has a huge effect on the components of the terrestrial cryosphere, so there is a great need for operational monitoring networks to understand the impacts and long-term changes of the cryosphere. Global observing systems recognize three major Essential Climate Variables for permafrost: permafrost temperature, active layer thickness and rock glacier velocity. The monitoring of these parameters is covered by the groups of the Global Terrestrial Network for Permafrost (GTN-P) and Rock Glacier Inventories and Kinematics (RGIK), which will collaborate on the organisation of this Short Course. Our aim is to provide participants with: a) general background on GTN-P and RGIK activities; b) the latest updates and a demonstration of the new version of the GTN-P database; c) current developments of the RGIK database and monitoring standards.
Solicited authors:
Anna Irrgang, Tillmann Lübker, Cécile Pellet, Sebastian Laboor, Sebastián Vivero
Structural geologic modeling is a crucial part of many geoscientific workflows. There is a wide variety of applications, from geological research to applied fields such as geothermics, geotechnical engineering, and natural resource exploration. This short course introduces participants to GemPy, a powerful and successful open-source Python library for 3D geological modeling. GemPy allows users to generate geological models efficiently and integrates well with the Python ecosystem, making it a valuable resource for both researchers and industry professionals.
The course will cover the following topics:
(1) Modeling approach and theoretical background: An introduction to the principles behind implicit structural geological modeling.
(2) Creating a GemPy model: A step-by-step guide to building a basic geological model using GemPy.
(3) Open Source Project: An overview of GemPy's development, community, and opportunities for contribution.
(4) Outlook and applications: Exploration of the wide-ranging applications of GemPy in various geoscientific fields, including the link to geophysical inversion. Coding examples for advanced functionalities and examples from publications.
This interactive course is designed for both beginners and those with some experience in geological modeling. Participants are encouraged to bring a laptop to actively engage in the tutorials and apply their newly acquired skills by writing their own code. While basic Python knowledge and a pre-installed Python environment are beneficial, they are not mandatory for participation.
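As a flavour of what such code can look like, here is a minimal, hedged sketch in the style of the GemPy 2.x API (surface names, coordinates and orientation are invented; exact call signatures may differ between GemPy versions):

```python
import gempy as gp

# Initialise an empty model over a 1 km cube.
geo_model = gp.create_model("demo_model")
gp.init_data(
    geo_model,
    extent=[0, 1000, 0, 1000, 0, 1000],  # x_min, x_max, y_min, y_max, z_min, z_max
    resolution=[50, 50, 50],
)

# Define one surface from two interface points and one orientation.
geo_model.add_surfaces("layer_1")
geo_model.add_surface_points(X=250, Y=500, Z=600, surface="layer_1")
geo_model.add_surface_points(X=750, Y=500, Z=650, surface="layer_1")
geo_model.add_orientations(X=500, Y=500, Z=625, surface="layer_1", pole_vector=(0, 0, 1))

# Interpolate the implicit scalar field, then compute and plot the model.
gp.set_interpolator(geo_model)
gp.compute_model(geo_model)
gp.plot_2d(geo_model)
```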
In April 2023, EPOS, the European Plate Observing System, launched the EPOS Data Portal (https://www.ics-c.epos-eu.org/), which provides access to multidisciplinary data, data products, services and software from the solid Earth science domain. Currently, ten thematic communities provide input to the EPOS Data Portal through services (APIs): Anthropogenic Hazards, Geological Information and Modelling, Geomagnetic Observations, GNSS Data and Products, Multi-Scale Laboratories, Near Fault Observatories, Satellite Data, Seismology, Tsunami and Volcano Observations.
The EPOS Data Portal enables search and discovery of assets through metadata and their visualisation in map, table or graph views, including download of the assets, with the objective of enabling multi-, inter- and transdisciplinary research in line with FAIR principles.
This short course will provide an introduction to the EPOS ecosystem, a demonstration of the EPOS Data Portal, and hands-on training following a scientific use case on the online portal. Participants are expected to have a scientific background in one or more of the scientific domains listed above.
The training especially targets young researchers and all those who need to combine multi-, inter- and transdisciplinary data in their research. The EPOS Data Portal will simplify data search for Early Career Scientists and can potentially help accelerate their career development.
Feedback from participants will be collected and used for further improvements of the Data Portal.
OpenStreetMap (OSM) is probably the largest and best-known crowdsourced database of geospatial information. Its data can be harnessed to address a variety of scientific and policy questions, from urban planning to demographic studies, environmental monitoring, energy simulations and many others.
Understanding the structure and the variety of content in OSM can enable researchers and policymakers to use it as a relevant dataset for their specific objectives.
Moreover, familiarity with tools and services for filtering and extracting data per geographic area or topic can empower users to tailor OSM data to meet their unique needs. Additionally, learning to contribute new data to OSM enriches the database and fosters a collaborative environment that supports ongoing geospatial research and community engagement both for researchers themselves and also in interactions with stakeholders and citizens. By actively participating in the OSM community, geoscientists can ensure that the data remains current and relevant, ultimately enhancing the impact of their work in addressing pressing environmental and societal challenges.
The short course will begin with an introduction to the concepts and content of OpenStreetMap, followed by a brief review of services and tools for filtering, extracting, and downloading data. Participants will engage in hands-on activities to contribute new data directly, along with hints and tips on how to understand and evaluate the pros and cons of its open and collaborative foundational principles.
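As one hedged example of programmatic data extraction, OSM data can be filtered by tag and bounding box through the public Overpass API (the endpoint and the amenity tag are real; the bounding box is an arbitrary illustration covering central Vienna):

```python
import requests

# Overpass QL query: drinking-water points in a (south, west, north, east) box.
query = """
[out:json][timeout:25];
node["amenity"="drinking_water"](48.19,16.35,48.22,16.40);
out body;
"""

response = requests.post(
    "https://overpass-api.de/api/interpreter", data={"data": query}, timeout=30
)
response.raise_for_status()

# Print the OSM id and coordinates of each matching node.
for node in response.json()["elements"]:
    print(node["id"], node["lat"], node["lon"])
```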
The advancement of Open Science and the affordability of computing services allow for the discovery and processing of large amounts of information, boosting data integration from diverse scientific domains and blurring traditional discipline boundaries. However, data are often heterogeneous in format and provenance, and the capacity to combine them and extract new knowledge to address scientific and societal problems relies on standardisation, integration and interoperability.
Key enablers of the OS paradigm are ESFRI Research infrastructures, of which ECCSEL (www.eccsel.org), EMSO (https://emso.eu/) and EPOS (www.epos-eu.org), are examples currently enhancing FAIRness and integration within the Geo-INQUIRE project. Thanks to decades of work in data standardisation, integration and interoperability, they enable scientists to combine data from different disciplines and data sources into innovative research to solve scientific and societal questions.
But while data-driven science is ripe with opportunities for groundbreaking inter- and transdisciplinary results, many challenges and barriers remain.
This session aims to foster scientific cross-fertilization exploring real-life scientific studies and research experiences from scientists and ECS in Environmental Sciences. We also welcome contributions about challenges in connection to data availability, collection, processing, interpretation, and the application of interdisciplinary methods.
A non-exhaustive list of topics includes:
- multidisciplinary studies involving data from different disciplines, e.g. combining seismology, geodesy, oceanography and petrology to understand subduction zone dynamics;
- interdisciplinary works, integrating two or more disciplines to create fresh approaches, e.g. merging solid earth and ocean sciences data to study coastal/oceanic areas and earth dynamics;
- showcase activities enabling interdisciplinarity and open science, e.g. enhancing FAIRness of data and services, enriching data provision, enabling cross-domain AI applications, software and workflows, transnational access and capacity building for ECS;
- transdisciplinary experiences that surpass disciplinary boundaries, integrate paradigms and engage stakeholders from diverse backgrounds, e.g. bringing together geologists, social scientists, civil engineers and urban planners to define risk maps and prevention measures in urban planning, or studies combining volcanology, atmospheric, health and climate sciences.
The Earth system is a complex and dynamic network involving interactions between the atmosphere, oceans, land, and biosphere. Understanding and analyzing data from Earth system models is crucial for predicting and mitigating the impacts of climate change.
In this session, we aim to address the challenges of data accessibility and promote integrated use by opening gateways for Earth and environmental sciences. It is essential to enable communication, integration, and data processing for multidisciplinary applications across different fields. This session will highlight current or emerging initiatives, standards, software, or complete IT systems that support environmental data science for a better understanding of the Earth System. The services presented must adhere to the FAIR principles, promoting more robust and transparent research. Additionally, the session will showcase efforts to implement FAIR standards, which are essential for seamless data integration and innovation in Earth Sciences.
We welcome contributions from any origin – whether a European (or non-European) project, research infrastructure, institute, association, or individual researcher – that present a vision, service, IT infrastructure, software stack, standard, etc.
In particular, we encourage submissions that demonstrate how we can:
- Enhance data by adding semantic annotations or adhering to standards, and ensure its availability through appropriate and efficient services (see the sketch after this list).
- Ensure that metadata, data formats, and access policies align with universally accepted standards to maintain consistency and interoperability.
- Provide tools for analyzing, visualizing and exploring data through interfaces and virtual environments that are both standardized and user-friendly.
- Collaborate with end users to design and present essential information effectively.
- Provide clear and effective training to help users understand and adopt FAIR practices, ensuring they can utilize the services and tools efficiently.
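As a concrete flavour of the first bullet, here is a minimal, hedged sketch of a semantic annotation using the schema.org vocabulary serialised as JSON-LD (all field values are hypothetical placeholders, not a real record):

```python
import json

# Describe a dataset with schema.org terms so that generic harvesters
# and search engines can discover and interpret it.
metadata = {
    "@context": "https://schema.org/",
    "@type": "Dataset",
    "name": "Example river discharge time series",
    "description": "Daily discharge at a hypothetical gauging station.",
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "variableMeasured": {
        "@type": "PropertyValue",
        "name": "discharge",
        "unitText": "m3/s",
    },
}

# Write the annotation alongside the data it describes.
with open("dataset_metadata.jsonld", "w") as f:
    json.dump(metadata, f, indent=2)
```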
This session brings together researchers and experts in geologic mapping and cartography, focusing on terrestrial, marine and planetary environments. We integrate the heritage of previous ESSI sessions about geologic mapping of extreme environments.
Despite significant advancements in mapping the remotest and most inhospitable places on Earth, vast areas remain unexplored due to technical and/or financial challenges. Some of these regions are crucial, as they hold the potential to uncover important geological and habitat information to facilitate future exploration and understanding of challenging environments.
Similarly, space exploration has seen numerous missions to Mars, the Moon and the bodies of our Solar System, and geologic mapping is needed to support science discovery and exploration.
Remote locations are commonly associated with the ocean floor, logistically challenging terrestrial regions, and/or planetary surfaces. Extreme environmental conditions are found in hot deserts, under the ice, in high-mountain ranges, in volcanic edifices, hidden underneath dense canopy cover, or within the near-surface crust. All such locations are prime targets for geologic mapping in the broad sense, with remote sensing data playing a critical role in driving investigation and sampling at specific sites.
This session invites contributions that explore innovative techniques and methods for field and remote sensing data collection, the integration of diverse datasets, the development of geological maps, and the use of digital interoperable formats and protocols.
Topics discussed at the session will include:
- Ocean mapping using manned and unmanned vehicles and devices
- Planetary geologic mapping
- Offshore exploration using remote sensing techniques
- Geologic investigation of desert environments
- Innovative techniques for field and remote sensing data collection
- Integration of non-common datasets
- International cooperation
- Digital formats and interoperability/usability of geological maps
- Geologic mapping programs from national and space agencies
- Cartographic representation
- Derived or specialized maps
The goal of this session is to stimulate and foster collaboration between marine, terrestrial and planetary geoscientists, highlighting the similarities, differences, and open issues in geologic mapping. By sharing knowledge, techniques and methods, we aim to fill potential gaps and enhance our understanding of these environments.
Public information:
This session focuses on the importance of geologic maps in exploration. We tie together geologic mapping of the ocean floor, of planet Earth's land surface, and of the solid bodies of the Solar System. Having these techniques and methods presented in the same session will stimulate discussions and synergies that would not be possible in environment-specific symposia.
Geosciences encompass a wide range of disciplines, including the atmosphere, oceans, and geology, all of which depend on advanced mapping and visualization techniques to analyze and communicate increasingly complex datasets. These approaches support scientific research while enabling decision-making in areas such as environmental management, resource exploration, and hazard mitigation.
This session focuses on the intersection of mapping, modelling and visualization in geosciences, highlighting innovative tools, workflows, and applications.
We invite contributions addressing:
- Advanced techniques for mapping, analyzing, and visualizing data from all compartments and scales of the Earth;
- Geological field mapping;
- 2D, 3D and 4D geological modelling;
- Co-designed tools that integrate interdisciplinary datasets for research and stakeholder engagement;
- Platforms and workflows for managing and visualizing high-frequency, high-dimensional, and metadata-enriched datasets;
- Harmonization of geoscience datasets using FAIR principles (e.g. cross-boundary data);
- Portrayal and presentation;
- Education and training initiatives for mapping, modelling, and visualization techniques.
By bringing together scientists, developers, and practitioners, this transdisciplinary session aims to foster a collaborative community, advancing mapping and visualization tools and ensuring their accessibility and impact across the geosciences. The outcomes will be applicable to and benefit various uses (e.g. mineral exploration, geological risk assessment, groundwater resource protection and coastal protection).
Understanding Earth's natural processes, particularly in the context of global climate change, has gained widespread recognition as an urgent and central research priority that requires further exploration. Recent advancements in satellite technology, characterized by new platforms with high revisit times and the growing capabilities for collecting repetitive ultra-high-resolution aerial images through unmanned aerial vehicles (UAVs), have ushered in exciting opportunities for the scientific community. These developments pave the way for developing and applying innovative image-processing algorithms to address longstanding and emerging environmental challenges.
The primary objective of this session is to convene researchers dedicated to the field of satellite and aerial time-series imagery, showcasing ongoing research efforts and novel applications in this dynamic area. The session is specifically focused on studies centred around the creation and utilization of pioneering algorithms for processing satellite time-series data, as well as their applications in various domains of remote sensing aimed at investigating long-term processes across all Earth's realms, including the sea, ice, land, and atmosphere.
In today's era of unprecedented environmental challenges and the ever-increasing availability of data from satellite and aerial sources, this session serves as a platform to foster collaboration and knowledge exchange among experts working on the cutting edge of Earth observation technology. By harnessing the power of satellite and aerial time-series imagery, we can unlock valuable insights into our planet's complex systems, ultimately aiding our collective efforts to address pressing global issues such as climate change, natural resource management, disaster mitigation, and ecosystem preservation.
The session organizers welcome contributions from researchers engaged in applied and theoretical research. These contributions should emphasize fresh methods and innovative satellite and aerial time-series imagery applications across all geoscience disciplines. This inclusivity encompasses aerial and satellite platforms and the data they acquire across the electromagnetic spectrum.
The coastal zone is globally of great environmental and economic importance, but the stability and sustainability of this region faces many threats. Climate-induced sea level rise, coastal erosion and flooding due to increased storms, and pollution and disturbance of ecosystems are all stresses shaping the present coastline and near-shore environments. These direct impacts on the coast are driving coastline management and marine policies worldwide.
These initiatives rely on key, up-to-date, and repeatable environmental information layers, which are required to effectively monitor coastal change and make informed and coordinated decisions on the sustainable use of coastal and marine resources, in alignment with climate strategies and the protection of coastal areas.
To address this need, advanced methodologies based on remote sensing are becoming more widely used. These techniques have benefited from the surge of Earth Observation data, and advancements in computational and classification algorithms. In recent years, together with the upsurge of cloud computing, there has been a growing focus on new challenges such as sensor and data fusion from multiple sources, and its potential application to effectively monitoring the changes in coastal environments.
This session calls for papers that advance our capability or understanding of the application of Earth Observation remote sensing to coastal zone monitoring, with specific interest in contributions that (1) develop novel methodologies or data fusion workflows in coastal geomorphology, near-shore satellite-derived bathymetry, coastal altimetry, coastal dynamics, water quality and coastal ecosystems, (2) include validation and uncertainty budgets, (3) incorporate temporal resolution for monitoring and prediction of coastal change, and (4) impact a wide range of applications.
Sustainable agriculture and forestry face the challenges of lacking scalable solutions and sufficient data for monitoring vegetation structural and physiological traits, vegetation (a)biotic stress, and the impacts of environmental conditions and management practices on ecosystem productivity. Remote sensing from spaceborne, unmanned/manned airborne, and proximal sensors provides unprecedented data sources for agriculture and forestry monitoring across scales. The synergy of hyperspectral, multispectral, thermal, LiDAR, or microwave data can thoroughly identify vegetation stress symptoms in near real-time and can be combined with modeling approaches to forecast ecosystem productivity. This session welcomes a wide range of contributions on remote sensing for sustainable agriculture and forestry including, but not limited to: (1) the development of novel sensing instruments and technologies; (2) the quantification of ecosystem energy, carbon, water, and nutrient fluxes across spatial and temporal scales; (3) the synergy of multi-source and multi-modal data; (4) the development and applications of machine learning, radiative transfer modeling, or their hybrid; (5) the integration of remotely sensed plant traits to assess ecosystem functioning and services; (6) the application of remote sensing techniques for vegetation biotic and abiotic stress detection; and (7) remote sensing to advance nature-based solutions in agriculture and forestry for climate change mitigation. This session is inspired by the COST Action Pan-European Network of Green Deal Agriculture and Forestry Earth Observation Science (PANGEOS, https://pangeos.eu/), which aims to leverage state-of-the-art remote sensing technologies to advance field phenotyping workflows, precision agriculture/forestry practices and larger-scale operational assessments for a more sustainable management of Europe's natural resources.
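As a tiny, hedged illustration of the kind of multispectral computation underpinning much of this work, the Normalized Difference Vegetation Index (NDVI), a canonical proxy for vegetation vigour and stress, is computed per pixel from red and near-infrared reflectance (synthetic arrays stand in for real imagery here):

```python
import numpy as np

# Synthetic reflectance rasters in place of real multispectral bands.
rng = np.random.default_rng(0)
red = rng.uniform(0.02, 0.2, size=(100, 100))  # red-band reflectance
nir = rng.uniform(0.2, 0.6, size=(100, 100))   # near-infrared reflectance

# NDVI = (NIR - Red) / (NIR + Red); healthy vegetation pushes values towards 1.
ndvi = (nir - red) / (nir + red)
print(f"NDVI range: {ndvi.min():.2f} to {ndvi.max():.2f}")
```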
This session covers climate predictions from seasonal to multi-decadal timescales and their applications. Continuing to improve such predictions is of major importance to society. The session embraces advances in our understanding of the origins of seasonal to decadal predictability and of the limitations of such predictions. This includes advances in improving forecast skill and reliability and making the most of this information by developing and evaluating new applications and climate services.
The session welcomes contributions from dynamical models, machine-learning or other statistical methods and hybrid approaches. It will investigate predictions of various climate phenomena, including extremes, from global to regional scales, and from seasonal to multi-decadal timescales (including seamless predictions). Physical processes and sources relevant to long-term predictability (e.g. ocean, cryosphere, or land) as well as predicting large-scale atmospheric circulation anomalies associated with teleconnections will be discussed. Analysis of predictions in a multi-model framework, and ensemble forecast initialization and generation will be another focus of the session. We are also interested in approaches addressing initialization shocks and drifts. The session welcomes work on innovative methods of quality assessment and verification of climate predictions. We also invite contributions on the use of seasonal-to-decadal predictions for risk assessment, adaptation and further applications.
This session combines two key aspects of the research concerning geoscientific instrumentation: the monitoring of water systems and of marginal and degraded areas.
Instrumentation and measurement technologies are currently playing a key role in the monitoring, assessment and protection of water resources.
The first part focuses on measurement techniques, sensing methods and data science implications for the observation of water systems, emphasizing the strong link between measurement aspects and computational aspects characterising the water sector.
We aim to provide an up-to-date overview of observational techniques, data processing approaches and sensing technologies for water management and protection, paying attention to today's data science aspects, e.g. data analytics, big data and Artificial Intelligence.
We welcome contributions about field measurement approaches, development of new sensing techniques, low cost sensor systems and measurement methods enabling crowdsourced data collection.
Therefore, water quantity and quality measurements as well as water characterization techniques are within the scope of this session. Remote sensing techniques for the monitoring of water resources and/or the related infrastructures are also welcome. Contributions dealing with the integration of data from multiple sources are solicited, as well as the design of ICT architectures (including IoT-based networks).
Studies about signal and data processing techniques (including machine learning) and the integration between sensor networks and large data systems are also very encouraged.
The second part is devoted to a scientific/technological survey of observational strategies and sensing technologies for improving the quality of life and ensuring inclusivity of people in challenging social and economic contexts, such as marginal and degraded areas.
We welcome examples of the beneficial role of technological tools for the monitoring and protection of critical infrastructures (water, energy, transport), to improve inclusivity and ensure a proper exploitation of resources, also in economic and social terms. We also focus on the exploitation of natural and cultural resources to improve the economy and quality of life in marginal areas, which in many cases are rural. Furthermore, attention will be devoted to the development and exploitation of low-cost, scalable and portable sensing solutions for the monitoring of both large urban areas and poorly covered zones.
Remote sensing products have a high potential to contribute to monitoring and modelling of water resources. Nevertheless, their use by water managers is still limited due to lack of quality, resolution, trust, accessibility, or experience.
In this session, we look for new developments that support the use of remote sensing data for water management applications from local to global scales. We are looking for research to increase the quality of remote sensing products, such as higher spatial and/or temporal resolution mapping of land use and/or agricultural practices or improved assessments of river discharge, lake and reservoir volumes, groundwater resources, drought monitoring/modelling and its impact on water-stressed vegetation, as well as on irrigation volumes monitoring and modelling. We are interested in quality assessment of remote sensing products through uncertainty analysis or evaluations using alternative sources of data. We also welcome contributions using a combination of different techniques (physically based models or artificial intelligence techniques) or a combination of different sources of data (remote sensing and in situ) and different imagery types (satellite, airborne, drone). Finally, we wish to attract presentations on developments of user-friendly platforms (as open as possible), providing smooth access to remote sensing data for water applications.
We are particularly interested in applications of remote sensing to determine the human water interactions and the climate change impacts on the whole water cycle (including the inland and coastal links).
The socio-economic impacts associated with floods are increasing. Floods are the most frequent of the weather-related disasters and, in terms of the number of people affected, the most impactful: nearly 0.8 billion people were affected by inundations in the last decade, while the overall economic damage is estimated at more than $300 billion.
In this context, remote sensing represents a valuable source of data and observations that may alleviate the decline in field surveys and gauging stations, especially in remote areas and developing countries. The implementation of remotely-sensed variables (such as digital elevation model, river width, flood extent, water level, flow velocities, land cover, etc.) in hydraulic modelling promises to considerably improve our process understanding and prediction. During the last decades, an increasing amount of research has been undertaken to better exploit the potential of current and future satellite observations, from both government-funded and commercial missions, as well as many datasets from airborne sensors carried on airplanes and drones. In particular, in recent years, the scientific community has shown how remotely sensed variables have the potential to play a key role in the calibration and validation of hydraulic models, as well as provide a breakthrough in real-time flood monitoring applications. With the proliferation of open data and models in Earth observation with higher data volumes than ever before, combined with the exponential growth in deep learning, this progress is expected to rapidly increase.
We invite presentations related to flood monitoring and mapping through remotely sensed data including but not limited to:
- Remote sensing data for flood hazard and risk mapping, including commercial satellite missions as well as airborne sensors (aircraft and drones);
- Remote sensing techniques to monitor flood dynamics;
- The use of remotely sensed data for the calibration, or validation, of hydrological or hydraulic models;
- Data assimilation of remotely sensed data into hydrological and hydraulic models;
- Improvement of river discretization and monitoring based on Earth observations;
- River flow estimation from remote sensing;
- Deep learning based flood monitoring or prediction
Early career and underrepresented scientists are particularly encouraged to participate.
Public information:
The session is structured into two blocks, with exciting discussions planned at the end of each block, to allow the audience to engage fully with our excellent speaker lineup.
Landslides and slope instabilities are natural hazards that cause significant damage and loss of life around the world each year, yet their triggering mechanisms and failure dynamics remain an open area of research. Landslide-prone areas are characterized by highly heterogeneous properties and subsurface dynamics, where, for example, changes in fluid pathways, geomechanical parameters, and subsurface structures can occur over timescales ranging from seconds/minutes to months/years, requiring radically different approaches for their identification and prediction. Hence, there is a need to develop a suite of novel methods for studying the architecture of landslides and slope instabilities, as well as temporal and spatial changes in their internal structure and properties. The complexity of the problem requires the application of innovative research methods in data acquisition, methodology and the integrated interpretation of geophysical, geotechnical and geological data.
This session invites abstracts presenting novel and emerging trends and opportunities for landslide and slope instabilities reconnaissance, monitoring, and early-warning, particularly applying multi-method approaches. Presentations showing the integration of various geophysical and remote sensing techniques, especially using machine learning or time-lapse surveys, are especially welcome. Likewise, presentations focusing on determining the geomechanical parameters of mass movements using geological (boreholes, geomechanical or other surveys) and geophysical studies are also in the scope of the session.
We invite presentations that investigate mass movements using seismic and/or infrasound techniques as well as other geophysical or remote-sensing techniques. Topics of interest include source detection, location, characterization, modeling, and classification; precursory signal analysis; monitoring; innovative instrumentation; and hazard mitigation.
Solicited authors:
Adrian Flores Orozco, Mirko Pavoni, Małgorzata Chmiel
Effective and enhanced hydrological monitoring is essential for understanding water-related processes in our rapidly changing world. Image-based river monitoring has proven to be a powerful tool, significantly improving data collection, analysis, and accuracy, while supporting timely decision-making. The integration of remote and proximal sensing technologies with citizen science and artificial intelligence has the potential to revolutionize monitoring practices. To advance this field, it is vital to assess the quality of current research and ongoing initiatives, identifying future trajectories for continued innovation.
We invite submissions focused on hydrological monitoring utilizing advanced technologies, such as remote sensing, AI, machine learning, Unmanned Aerial Systems (UAS), and various camera systems, in combination with citizen science. Topics of interest include, but are not limited to:
• Disruptive and innovative sensors and technologies in hydrology.
• Advancing opportunistic sensing strategies in hydrology.
• Automated and semi-automated methods.
• Extraction and processing of water quality and river health parameters (e.g., turbidity, plastic transport, water depth, flow velocity).
• New approaches to long-term river monitoring (e.g., citizen science, camera systems—RGB/multispectral/hyperspectral, sensors, image processing, machine learning, data fusion).
• Innovative citizen science and crowd-based methods for monitoring hydrological extremes.
• Novel strategies to enhance the detail and accuracy of observations in remote areas or specific contexts.
The goal of this session is to bring together scientists working to advance hydrological monitoring, fostering a discussion on how to scale these innovations to larger applications.
This session is co-sponsored by MOXXI, the working group on novel observational methods of the IAHS.
The Critical Zone (CZ), the Earth's outer layer extending from the top of the vegetation canopy to the bottom of circulating groundwater, is essential for sustaining life, supporting ecosystems, and maintaining environmental health. Understanding this complex system requires collaborative, multidisciplinary approaches that integrate diverse perspectives and innovative methodologies, involving observations, modelling, and integration of the two. This session will highlight advances in understanding the interplay between soils, hydrology, and biogeochemical cycling, as well as the complex interactions between groundwater flow and other CZ components at different spatial scales. Drawing on data from established CZ observatories and networks, it will illustrate how diverse climates, geological settings, vegetation, and land-use practices influence groundwater processes and CZ evolution. Particular emphasis will be placed on the value of international collaboration and the importance of understanding how rapid surface processes interact with the slower dynamics of groundwater to collectively shape the CZ over time. Discussions will also address the potential impacts of climate change, extreme weather events, and wildfires on groundwater recharge, discharge patterns, and water quality. The overall goal is to enhance appreciation of collaborative research approaches and emphasise groundwater’s central role in CZ functioning.
Sitting under a tree, you feel the spark of an idea, and suddenly everything falls into place. The following days and tests confirm: you have made a magnificent discovery — so the classical story of scientific genius goes…
But science as a human activity is error-prone, and might be more adequately described as "trial and error", or as a process of successful "tinkering" (Knorr, 1979). Thus we want to turn the story around, and ask you to share 1) those ideas that seemed magnificent but turned out not to be, and 2) the errors, bugs, and mistakes in your work that made the scientific road bumpy. What ideas were torn down or did not work, and what concepts survived in the ashes or were robust despite errors? We explicitly solicit Blunders, Unexpected Glitches, and Surprises (BUGS) from modeling and field or lab experiments and from all disciplines of the Geosciences.
Handling mistakes and setbacks is a key skill of scientists. Yet, we publish only those parts of our research that did work. That is also because a study may have better chances to be accepted for publication in the scientific literature if it confirms an accepted theory or if it reaches a positive result (publication bias). Conversely, the cases that fail in their test of a new method or idea often end up in a drawer (which is why publication bias is also sometimes called the "file drawer effect"). This is potentially a waste of time and resources within our community as other scientists may set about testing the same idea or model setup without being aware of previous failed attempts.
In the spirit of open science, we want to bring the BUGS out of the drawers and into the spotlight. In a friendly atmosphere, we will learn from each other's mistakes, understand the impact of errors and abandoned paths on our work, and generate new insights for our science and scientific practice.
Here are some ideas for contributions that we would love to see:
- Ideas that sounded good at first, but turned out to not work.
- Results that presented themselves as great in the first place but turned out to be caused by a bug or measurement error.
- Errors and slip-ups that resulted in insights.
- Failed experiments and negative results.
- Obstacles and dead ends you found and would like to warn others about.
--
Knorr, Karin D. “Tinkering toward Success: Prelude to a Theory of Scientific Practice.” Theory and Society 8, no. 3 (1979): 347–76.
Solicited authors:
Jan Seibert
Co-organized by BG0/EMRP1/ESSI4/GD10/GI1/GI6/GM11/GMPV1/PS0/SM2/SSS11/ST4
Visualisation of scientific data is an integral part of scientific understanding and communication. Scientists have to make decisions about the most effective way to communicate their results every day. How do we best visualise the data to understand it ourselves? How do we best visualise our results to communicate with others? Common pitfalls include overcrowding, overcomplicated or suboptimal plot types, and inaccessible colour schemes. Scientists may also be overwhelmed by the graphics requirements of different publishers, for presentations, posters, etc. This short course is designed to help scientists improve their data visualisation skills so that their research outputs become more accessible within their own scientific community and reach a wider audience.
Topics discussed include:
- golden rules of DataViz;
- choosing the most appropriate plot type and designing a good DataViz;
- graphical elements, fonts and layout;
- colour schemes, accessibility and inclusiveness (see the sketch after this list);
- creativity vs simplicity – finding the right balance;
- figures for scientific journals (graphical requirements, rights and permissions);
- tools for effective data visualisation.
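As a small illustration of the colour-scheme bullet, the sketch below uses Matplotlib's perceptually uniform, colour-blind-friendly 'viridis' colormap to distinguish several lines (synthetic curves; one possible choice among many accessible palettes):

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 10, 200)
fig, ax = plt.subplots(figsize=(6, 3.5))

# Sample evenly spaced colours from viridis instead of the default cycle.
for i, phase in enumerate(np.linspace(0, np.pi, 5)):
    ax.plot(x, np.sin(x + phase), color=plt.cm.viridis(i / 4), label=f"phase {phase:.2f}")

ax.set_xlabel("x")
ax.set_ylabel("sin(x + phase)")
ax.legend(frameon=False)
fig.tight_layout()
plt.show()
```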
This course is co-organized by the Young Hydrologic Society (YHS), enabling networking and skill enhancement of early career researchers worldwide. Our goal is to help you make your figures more accessible to a wider audience, informative and beautiful. If you feel your graphs could be improved, we welcome you to join this short course.
Environmental DNA (eDNA) metabarcoding is a noninvasive method to detect biodiversity in a variety of environments that has many exciting applications for geosciences. In this short course, we introduce eDNA metabarcoding to a geoscience audience and present potential research applications.
During the past 75 years, radiocarbon dating has been applied across a wide range of disciplines, including archaeology, geology, hydrology, geophysics, atmospheric science, oceanography, and paleoclimatology, to name but a few. Radiocarbon analysis is extensively used in environmental research as a chronometer (geochronology) or as a tracer for carbon sources and natural pathways. In the last two decades, advances in accelerator mass spectrometry (AMS) have enabled the analysis of very small quantities of carbon, as small as tens of micrograms. This has opened new possibilities, such as dating specific compounds (biomarkers) in sediments and soils. Other innovative applications include distinguishing between old (fossil) and natural (biogenic) carbon or detecting illegal trafficking of wildlife products such as ivory, tortoiseshells, and fur skins. Despite the wide range of applications, archives, and systems studied with the help of radiocarbon dating, the method has a standard workflow, starting from sampling through preparation and analysis, and arriving at final data that require potential reservoir corrections and calibration.
This short course will provide an overview of radiocarbon dating, highlighting the state-of-the-art methods and their potential in environmental research, particularly in paleoclimatology. After a brief introduction to the method, participants will delve into practical examples of its application in the study of past climates, focusing on the 14C method and how we arrive at the radiocarbon age.
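As a pointer to that final step: under the standard Stuiver and Polach (1977) convention, the conventional radiocarbon age $t$ follows from the measured fraction of modern carbon $\mathrm{F^{14}C}$ via the Libby mean life of 8033 years,

$$ t = -8033 \,\ln\!\left(\mathrm{F^{14}C}\right) \quad [^{14}\mathrm{C}\ \text{yr BP}], $$

so a sample with, for example, $\mathrm{F^{14}C} = 0.5$ yields a conventional age of about 5568 $^{14}$C years, which is then calibrated to calendar years.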
The course will cover:
- applications in paleoclimate research and other environmental fields;
- sampling and preparation;
- calibration programs.
We strongly encourage discussions around radiocarbon research and will actively address problems related to sampling and calibration. This collaborative approach will enhance the understanding and application of radiocarbon dating in the respective fields.
ESSI5 – Open Session – Advances in Earth and Space Science Informatics
Sub-Programme Group Scientific Officers:
Jens Klump, Kaylin Bugbee, Alba Brobia