In an era where environmental challenges are increasingly complex, the integration of artificial intelligence (AI) into data-driven approaches is transforming how we understand and address these issues. This session aims to bring together professionals from national and regional agencies across Europe who are leveraging AI technologies to enhance environmental research and policy-making.
We invite participants to share their experiences, case studies, and innovative applications of AI in environmental monitoring, data analysis, and policy development. A key focus will be on how to build the necessary infrastructures and frameworks that facilitate the effective implementation of AI applications to support policy-making processes.
Key topics may include:
- Developing robust data infrastructures for AI integration
- AI applications in real-time environmental monitoring
- Creating collaborative frameworks for sharing AI-driven insights across agencies
- Strategies for overcoming challenges in implementing AI technologies in environmental contexts
- The role of AI in data-driven policy consulting and its impact on sustainability
By fostering interdisciplinary dialogue and collaboration, this session aims to identify best practices, explore new opportunities, and enhance the collective capacity of European agencies to address pressing environmental challenges through the power of AI. Join us in shaping the future of environmental policy and research in Europe!
Modern challenges of climate change, disaster management, public health and safety, resource management, and logistics can only be effectively addressed through big data analytics. Advances in technology are generating vast amounts of geospatial data on local and global scales. The integration of artificial intelligence (AI) and machine learning (ML) has become crucial for analysing these datasets, leading to the creation of maps and models across the various fields of the geosciences. Recent studies, however, highlight significant challenges when applying ML and AI to spatial and spatio-temporal data along the entire modelling pipeline, including reliable accuracy assessment, model interpretation, transferability, and uncertainty assessment. Recognition of this gap has led to the development of new spatio-temporally aware strategies and methods that promise to improve spatio-temporal predictions, the treatment of cascading uncertainties, decision making, and communication.
This session discusses challenges and advances in spatial and spatio-temporal machine learning methods and the software and infrastructures to support them.
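As one concrete illustration of such spatio-temporally aware strategies, spatially blocked cross-validation (rather than a random split) gives less optimistic accuracy estimates for spatial prediction models. The sketch below is a minimal, self-contained example on synthetic data; the block size and model choice are illustrative only.

```python
# Minimal sketch of spatial block cross-validation: samples are grouped into
# spatial blocks so that training and test folds are spatially disjoint,
# giving a less optimistic accuracy estimate than random K-fold.
# All data here are synthetic; block size and model choice are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GroupKFold
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(42)
n = 1000
x, y = rng.uniform(0, 100, n), rng.uniform(0, 100, n)   # sample coordinates
features = np.column_stack([x, y, rng.normal(size=n)])  # toy predictors
target = np.sin(x / 10) + np.cos(y / 10) + rng.normal(0, 0.1, n)

# Assign each sample to a 20 x 20 spatial block; blocks define the CV groups.
block_id = (x // 20).astype(int) * 5 + (y // 20).astype(int)

scores = []
for train_idx, test_idx in GroupKFold(n_splits=5).split(features, target, groups=block_id):
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(features[train_idx], target[train_idx])
    pred = model.predict(features[test_idx])
    scores.append(mean_squared_error(target[test_idx], pred))

print("Blocked-CV RMSE per fold:", np.sqrt(scores))
```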
Since the breakthrough of datacubes as a contributor to Analysis-Ready Data, a series of implementations, and likewise services, have been announced. However, these are often described only in publications, without publicly accessible deployments to evaluate.
In this session, hands-on demos are mandatory. Speakers must spend at most 50% of their time on slides and at least 50% on live demos of their tools. To guarantee fair and equal conditions, only in-person presentations will be allowed. Presenters are invited to share their latest and greatest functionality and data, but must balance this against their confidence that the demo will work out of the box.
This enables the audience to see first-hand which datacube features are effectively implemented, how stable they are under tight timing constraints, and what their real-life performance is. The expected outcome for attendees is a realistic overview of the datacube tools and services landscape, and an assessment of how well each tool supports their individual needs, such as analysis-readiness.
This session features live demos of datacube tools and services, rigorously focusing on "what really works". The audience sees first-hand which features are effectively implemented and how stable and fast they are, giving an impression of usefulness and maturity.
This session explores the transformative potential of large language models (LLMs) in the geosciences. LLMs are revolutionising the field by enabling researchers to process and interpret complex geological, climatological, environmental, hydrological and other Earth systems data with unprecedented speed and accuracy, leading to new discoveries and insights. Presenters are encouraged to share how LLMs have accelerated research by analysing vast datasets, automating data interpretation, and uncovering hidden patterns. Case studies highlighting breakthroughs in geology, climate science, and environmental monitoring are particularly welcome. This session will provide a platform for geoscientists to discuss the integration of LLMs into their workflows, enhancing both efficiency and discovery while addressing challenges such as model accuracy and data bias. Join us in contributing to this cutting-edge dialogue and helping shape the future of the geosciences through AI.
Recent breakthroughs in machine learning, notably deep learning, which allow data-driven AI models to exploit massive amounts of data, have created unprecedented potential for large-scale environmental monitoring through remote sensing. Despite the success of existing deep learning-based approaches in remote sensing for many applications, their shortcomings in jointly leveraging various aspects of Earth observation data prevent the potential of remote sensing for the environment from being fully exploited. In particular, integrating multiple data modalities and remote sensing sensors, leveraging deep learning methods over multi-spatial/spectral-resolution Earth observation data, and modeling space and time together offer remarkable opportunities for a comprehensive and accurate understanding of the environment. In this session, we aim to gather the community to delve into the latest scientific advances that leverage these multi-dimensional approaches to tackle pressing environmental challenges.
Earth, its weather and its climate constitute a complex system whose monitoring and modelling have undergone remarkable progress in recent years. In particular, enhanced spaceborne observations and the integration of Machine/Deep Learning (ML/DL) techniques are key drivers of innovation in Earth System Observation and Prediction (ESOP) for Weather and Climate.
ML/DL techniques have revolutionized numerous fields and proven advantageous in various applications. They have garnered significant attention and adoption within the ESOP community owing to their ability to enhance our understanding of, and prediction capabilities for, the Earth's complex dynamics. One prominent area where ML/DL techniques have proven invaluable is the development of high-fidelity digital models of the Earth on a global scale. These models serve as comprehensive monitoring, simulation, and prediction systems that enable us to analyse and forecast the intricate interactions between natural phenomena and human activities.
ML/DL solutions have also shown promising advancements in weather forecasting and climate prediction. Algorithms can be trained to identify instances where physical models exhibit inaccuracies and subsequently learn to correct the predictions accordingly. Moreover, AI-based models have the potential to create hybrid forecast models that combine the strengths of traditional, physics-based NWP/climate prediction methodologies with the capabilities of ML/DL, ultimately enhancing the accuracy and reliability of predictions. As such, the application of ML/DL to ESOP, and its hybrid usage in combination with established numerical prediction and assimilation methods, are thematic areas of particular interest in this session.
Inspired by the successful four-day ESA-ECMWF ML4ESOP workshop, this sprint session invites new ML4ESOP explorers to present their latest innovations in ESOP. The focus is on the exploration of new data sources and benchmarks for weather and climate modelling, the adaptation of large-scale data-driven Earth system models, and novel demonstrations of their applicability to weather and climate observation and prediction.
This session invites all experts from diverse fields to discuss how recent advances innovate on established ESOP approaches, to address current challenges, and to identify opportunities for future work.
Large-scale machine learning models, for example FourCastNet, Pangu-Weather, GraphCast and ECMWF’s AIFS, are currently transforming weather forecasting and are also reshaping the weather and climate research landscape. The first machine learning models that can be applied to climate change scenarios are also being developed. This session will bring together developers and users from research and operations, from academia as well as private enterprises, to discuss the current state of the art and future developments in the field. We welcome contributions from machine learning model developers, consortia and users of single component machine learning models, such as those focused on the atmosphere, ocean or sea ice, as well as coupled models that consider the entire Earth system. Foundation models, which learn more general representations of the Earth system, and studies on the combination of large-scale machine learning models and traditional physics-based solvers are also welcome. Of particular interest are also contributions on verification methods, out-of-distribution experiments, real or idealized case studies across different scales (e.g. air pollution, solar energy production, extratropical cyclones, ocean dynamics) as well as contributions with a focus on the physical consistency of such machine learning models. The session aims to facilitate dialogue between researchers interested in scientific discovery and developers interested in novel machine learning ideas pertinent to the domain, e.g. spatio-temporal diffusion models, variational autoencoders and novel training protocols.
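As a concrete example of the verification methods mentioned above, the latitude-weighted anomaly correlation coefficient (ACC) is one of the standard headline scores reported for data-driven forecast models. The sketch below computes it on synthetic fields; the grid and data are illustrative only.

```python
# Minimal sketch of a standard verification metric for gridded forecasts:
# the latitude-weighted anomaly correlation coefficient (ACC), as commonly
# reported for models such as Pangu-Weather or GraphCast. Fields are synthetic.
import numpy as np

def lat_weighted_acc(forecast, truth, climatology, lats_deg):
    """ACC over a (lat, lon) grid, weighted by cos(latitude)."""
    fa = forecast - climatology                 # forecast anomaly
    ta = truth - climatology                    # observed/analysis anomaly
    w = np.cos(np.deg2rad(lats_deg))[:, None]   # broadcast weights over longitude
    num = np.sum(w * fa * ta)
    den = np.sqrt(np.sum(w * fa**2) * np.sum(w * ta**2))
    return num / den

lats = np.linspace(-90, 90, 181)
lons = np.linspace(0, 359, 360)
clim = np.zeros((lats.size, lons.size))
truth = np.random.default_rng(0).normal(size=clim.shape)
forecast = truth + np.random.default_rng(1).normal(0, 0.5, size=clim.shape)
print(f"ACC = {lat_weighted_acc(forecast, truth, clim, lats):.3f}")
```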
Foundation models (FMs) have shown great promise for image and text-based tasks, including applications to Earth Observation (EO) satellite imagery. However, with more and more models being published, model inter-comparison is becoming increasingly challenging. This session seeks to highlight works focussed on benchmarking and best practices for using pre-trained models for diverse tasks across EO, Weather and Climate research. We invite submissions focussed on creating geospatially-aware FMs, e.g. multi-modality, multi-temporality, multi-resolution and sensor independence, that enable new scientific understanding within their respective research domains.
To foster discussions on current applications, challenges and opportunities of FMs for EO, Weather and Climate, we encourage submissions from AI and domain researchers, climate modellers, industry experts and stakeholders from the AI4EO, High-Performance Computing (HPC) and Big Data communities.
The topics of the session are:
Benchmarking and Evaluating FMs: Establishing standardised fair evaluation metrics and benchmarks to assess the performance and capabilities of FMs in processing data, ensuring reliability and efficiency.
Best Practices: Guidelines for using existing pre-trained models for diverse applications, with a specific focus on how to decide which models are best for certain use cases.
Sensor independence: FMs can process data from various sensors, including multi- or hyper-spectral, SAR, Very High Resolution (VHR) satellite data and Earth System models, enabling holistic analysis of the Earth's dynamics.
Multi-modality and multi-temporal models: FMs can adeptly handle diverse data modalities such as text, video, imagery and time-series, offering new approaches to data analysis, processing and interpretation, as well as to change detection capability.
Scientific understanding: In addition to understanding how to best create general-purpose models, it is of utmost importance to highlight what scientific insights are enabled through the creation of FMs. In particular, insights in relation to physical principles, spectral response, temporal and spatial performance and causality (e.g. in the areas of cloud dynamics, hydrology and oceanography) are strongly invited.
Implications of FMs for the Community: Understanding the potential societal, environmental and economic impacts of implementing FMs in EO applications, fostering informed decision-making and resource management.
Science Foundation Models (SFM), based on large volumes of data and computing at scale, have already demonstrated their usefulness in science applications. These generalized AI models are designed not just for specific tasks but for a plethora of downstream applications. Trained on any sequence data through self-supervised methods, FMs eliminate the need for extensive labeled datasets. Often leveraging the power of transformer architectures, which utilize self-attention mechanisms, FMs can capture intricate relationships in data across space and time. Their emergent properties, derived from the data, make them invaluable tools for scientific research. When fine-tuned, FMs outperform traditional models, both in efficiency and accuracy, paving the way for rapid development of diverse applications. FMs, with their ability to synthesize vast amounts of data and discern intricate patterns, can revolutionize our understanding of and response to challenging global problems, such as monitoring and mitigating the impacts of climate change and other natural hazards.
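To make the self-supervised idea concrete, the minimal sketch below trains a small transformer encoder to reconstruct randomly masked patches of unlabeled sequence data, the proxy-task pattern behind many FMs. All sizes and names are illustrative; this is not any particular published model.

```python
# Minimal sketch of masked-reconstruction self-supervised pre-training:
# a transformer encoder learns to reconstruct randomly masked patches of a
# sequence, so no labels are needed. Sizes and data are illustrative.
import torch
import torch.nn as nn

d_model, n_patches, batch = 64, 32, 16
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True),
    num_layers=2,
)
proj_in = nn.Linear(8, d_model)    # embed 8-dim input patches
proj_out = nn.Linear(d_model, 8)   # reconstruct patches from encodings
mask_token = nn.Parameter(torch.zeros(d_model))

opt = torch.optim.Adam(
    list(encoder.parameters()) + list(proj_in.parameters())
    + list(proj_out.parameters()) + [mask_token], lr=1e-3)

for step in range(100):
    x = torch.randn(batch, n_patches, 8)       # unlabeled "sequence data"
    tokens = proj_in(x)
    mask = torch.rand(batch, n_patches) < 0.5  # mask 50% of patches
    tokens = torch.where(mask.unsqueeze(-1),   # replace masked positions with
                         mask_token.expand_as(tokens), tokens)  # a learnable token
    recon = proj_out(encoder(tokens))
    loss = ((recon[mask] - x[mask]) ** 2).mean()  # loss only on masked patches
    opt.zero_grad(); loss.backward(); opt.step()
```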
The session will discuss advances, early results and best practices related to the preparation and provisioning of curated data, the construction and evaluation of model architectures, the scaling properties and computational characteristics of model pretraining, use cases and fine-tuning for downstream applications, and MLOps for the deployment of models for research and applications. The session also encourages discussion on broad community involvement in the development of open foundation models for science that are accessible to all.
Earth Observation Foundation Models (EO-FMs) are large-scale AI models that undergo pre-training on proxy tasks using self-supervised learning to generate versatile data representations. Those representations are referred to as embeddings. Pre-training of EO-FMs is designed to capture the unique spatio-temporal characteristics of remote sensing data and climate model predictions into embeddings. After pre-training, the EO-FM encoder can be adapted for a wide range of downstream applications through fine-tuning with a decoder head. This approach enables high accuracy and generalization across different regions, even when limited labeled data is available.
The most critical and computationally intensive phase in the development of EO-FMs is the pre-training stage. This phase involves several key components:
1. data sampling strategies to ensure diverse and representative datasets,
2. exploration of various proxy tasks—such as spatial-spectral-temporal masking, diffusion-based methods, and contrastive learning—to enhance the model's ability to learn the structure of complex geospatial data,
3. robust scaling-law measurement and extrapolation: finding the optimal balance between model size, data size, and computational effort (a minimal sketch follows this list),
4. scalable model training: leveraging distributed systems (supercomputers, cloud environments, etc.) and advanced parallelism techniques for efficient pre-training.
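As a minimal illustration of item 3, validation losses from pilot pre-training runs at several model sizes can be fitted with a power law and extrapolated to larger models; the numbers below are made up for demonstration only.

```python
# Minimal sketch of scaling-law measurement: fit a power law
# L(N) = a * N**(-alpha) + c to validation losses of pilot runs at several
# model sizes N, then extrapolate to a target size. The data are invented.
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, alpha, c):
    return a * n ** (-alpha) + c

# (parameter count, validation loss) from hypothetical small pre-training runs
n_params = np.array([1e6, 3e6, 1e7, 3e7, 1e8])
val_loss = np.array([0.86, 0.76, 0.67, 0.61, 0.55])

(a, alpha, c), _ = curve_fit(power_law, n_params, val_loss, p0=(8.0, 0.2, 0.3))
print(f"fit: L(N) = {a:.2f} * N^(-{alpha:.3f}) + {c:.3f}")
print(f"extrapolated loss at N=1e9: {power_law(1e9, a, alpha, c):.3f}")
```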
Evaluation of pre-trained EO-FMs is critically linked to benchmarking their embeddings. Developing generic frameworks to test the universal character of Earth observation data embeddings remains an open direction of research.
This session will focus on sharing insights from extensive pre-training experiments, including ablation studies that examine the impact of different pre-training strategies on model performance. We also invite contributions with practical guidance for training EO-FMs on large-scale GPU clusters. In addition, we would like to discuss scalable training strategies for multi-modal and multi-domain EO-FMs. We aim to foster a deeper understanding of how to optimize pre-training processes for enhanced model effectiveness in various geospatial applications through the use of embeddings.
The recent growing number of probes in the heliosphere, and future missions in preparation, have led to the current decade being labelled "the golden age of heliophysics research". With more viewpoints and more data downlinked to Earth, machine learning (ML) has become a precious tool for planetary and heliospheric research, helping to process the increasing amount of data and supporting the discovery and modelling of physical systems. Recent years have also seen the development of novel approaches that leverage complex data representations with highly parameterised machine learning models and combine them with well-defined and understood physical models. These advancements in ML with physical insights, or physics-informed neural networks, inspire new questions about how each field can help develop the other. To better understand this intersection between data-driven learning approaches and physical models in planetary sciences and heliophysics, we seek to bring ML researchers and physical scientists together in this session and stimulate the interaction of both fields by presenting state-of-the-art approaches and cross-disciplinary visions of the field.
The "ML for Planetary Sciences and Heliophysics" session aims to provide an inclusive and cutting-edge space for discussions and exchanges at the intersection of machine learning, planetary and heliophysics topics. This space covers (1) the application of machine learning/deep learning to space research, (2) novel datasets and statistical data analysis methods over large data corpora, and (3) new approaches combining learning-based with physics-based to bring an understanding of the new AI-powered science and the resulting advancements in heliophysics research.
Topics of interest include all aspects of ML and heliophysics, including, but not limited to: space weather forecasting, computer vision systems applied to space data, time-series analysis of dynamical systems, new machine learning models and data-assimilation techniques, and physically informed models.
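For readers new to the physics-informed approach mentioned above, the following minimal PyTorch sketch trains a network to satisfy a simple ODE, du/dt = -u with u(0) = 1, purely through a physics-residual loss. It is a toy illustration of the technique, not a heliophysics model.

```python
# Minimal sketch of a physics-informed neural network (PINN): a small network
# u_theta(t) is trained so that the ODE residual du/dt + u = 0 and the initial
# condition u(0) = 1 are satisfied; the exact solution is exp(-t).
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    t = torch.rand(128, 1, requires_grad=True)  # collocation points in [0, 1]
    u = net(t)
    dudt = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    physics = ((dudt + u) ** 2).mean()          # ODE residual: du/dt = -u
    ic = ((net(torch.zeros(1, 1)) - 1.0) ** 2).mean()  # initial condition u(0)=1
    loss = physics + ic
    opt.zero_grad(); loss.backward(); opt.step()

t_test = torch.tensor([[0.5]])
print(f"u(0.5) ~ {net(t_test).item():.4f} (exact: {torch.exp(torch.tensor(-0.5)).item():.4f})")
```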
One of the big challenges in Earth system science consists in providing reliable climate predictions on sub-seasonal, seasonal, decadal and longer timescales. The resulting data have the potential to be translated into climate information leading to a better assessment of global and regional climate-related risks.
The main goals of the session are (i) to identify gaps in current climate prediction methods and (ii) to report and evaluate the latest progress in climate forecasting on subseasonal-to-decadal and longer timescales. This will include presentations and discussions of developments in predictions for the different time horizons from dynamical ensemble and statistical/empirical forecast systems, as well as the aspects required for their application: forecast quality assessment, multi-model combination, bias adjustment, downscaling, exploration of artificial-intelligence methods, etc.
Following the WCRP strategic plan for 2019-2028, contributions embracing climate forecasting from an Earth system science perspective are solicited. This includes the study of coupled processes between atmosphere, land, ocean, and sea-ice components, as well as the impacts of coupling and feedbacks in physical, hydrological, chemical, biological, and human dimensions. Contributions are also sought on initialization methods that optimally use observations from different Earth system components, on assessing and mitigating the impacts of model errors on skill, and on ensemble methods.
We also encourage contributions on the use of climate predictions for climate impact assessment, demonstrations of end-user value for climate risk applications and climate-change adaptation, and the development of early warning systems.
A special focus will be put on the use of operational climate predictions (C3S, NMME, S2S), results from the CMIP5-CMIP6 decadal prediction experiments, and climate-prediction research and application projects.
An increasingly important aspect of climate forecast applications is the use of the most appropriate downscaling methods, based on dynamical, statistical, or artificial-intelligence approaches or their combination, which are needed to generate time series and fields with an appropriate spatial or temporal resolution. This is extensively considered in the session, which therefore brings together scientists from all geoscientific disciplines working on both prediction and application problems.
In recent years, technologies based on Artificial Intelligence (AI), such as image processing, smart sensors, and intelligent inversion, have garnered significant attention from researchers in the geosciences community. These technologies offer the promise of transitioning geosciences from qualitative to quantitative analysis, unlocking new insights and capabilities previously thought unattainable.
One of the key reasons for the growing popularity of AI in geosciences is its unparalleled ability to efficiently analyze vast datasets within remarkably short timeframes. This capability empowers scientists and researchers to tackle some of the most intricate and challenging issues in fields like Geophysics, Seismology, Hydrology, Planetary Science, Remote Sensing, and Disaster Risk Reduction.
As we stand on the cusp of a new era in geosciences, the integration of artificial intelligence promises to deliver more accurate estimations, efficient predictions, and innovative solutions. By leveraging algorithms and machine learning, AI empowers geoscientists to uncover intricate patterns and relationships within complex data sources, ultimately advancing our understanding of the Earth's dynamic systems. In essence, artificial intelligence has become an indispensable tool in the pursuit of quantitative precision and deeper insights in the fascinating world of geosciences.
For this reason, the aim of this session is to explore new advances in, and approaches to, AI in the geosciences.
The complexity of hydrological and Earth systems poses significant challenges to our ability to predict and understand them. The advent of machine learning (ML) provides powerful tools for modeling these complex systems. However, realizing their full potential in this field is not just about algorithms and data; it requires a cooperative interaction between domain knowledge and data-driven power. This session aims to explore the frontier of this convergence and how it facilitates a deeper process understanding of various aspects of hydrological processes and their interactions with the atmosphere and biosphere across spatial and temporal scales.
We invite researchers working in the fields of explainable AI, physics-informed ML, hybrid Earth system modeling (ESM), and AI for causal and equation discovery in hydrology and Earth system sciences to share their methodologies, findings, and insights. Submissions are welcome on topics including, but not limited to:
- Explainability and transparency in ML/AI modeling of hydrological and Earth systems;
- Process and knowledge integration in ML/AI models;
- Data assimilation and hybrid ESM approaches;
- Causal learning and inference in ML models;
- Data-driven equation discovery in hydrological and Earth systems;
- Data-driven process understanding in hydrological and Earth systems;
- Challenges, limitations, and solutions related to hybrid models and XAI.
Deep Learning has seen accelerated adoption across Hydrology and the broader Earth Sciences. This session highlights the continued integration of deep learning and its many variants into traditional and emerging hydrology-related workflows. We welcome abstracts related to novel theory development, new methodologies, or practical applications of deep learning in hydrological modeling and process understanding. This might include, but is not limited to, the following:
(1) Development of novel deep learning models or modeling workflows.
(2) Probing, exploring and improving our understanding of the (internal) states/representations of deep learning models to improve models and/or gain system insights.
(3) Understanding the reliability of deep learning, e.g., under non-stationarity and climate change.
(4) Modeling human behavior and impacts on the hydrological cycle.
(5) Deep Learning approaches for extreme event analysis, detection, and mitigation.
(6) Natural Language Processing in support of models and/or modeling workflows.
(7) Applications of Large Language Models and Large Multimodal Models (e.g. ChatGPT, Gemini, etc.) in the context of hydrology.
(8) Uncertainty estimation for and with Deep Learning.
(9) Advances towards foundation models in the context of hydrology and the Earth Sciences more generally.
(10) Exploration of different training strategies, such as self-supervised learning, unsupervised learning, and reinforcement learning.
Proper characterization of uncertainty remains a major research and operational challenge in the Environmental Sciences and is inherent to many aspects of modelling, impacting model structure development; parameter estimation; an adequate representation of the data (input data and data used to evaluate the models); initial and boundary conditions; and hypothesis testing. To address this challenge, methods that have proved very helpful include a) uncertainty analysis (UA), which seeks to identify, quantify and reduce the different sources of uncertainty, as well as propagating them through a system/model, and b) the closely related methods for sensitivity analysis (SA), which evaluate the role and significance of uncertain factors in the functioning of systems/models.
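As a minimal, self-contained illustration of variance-based SA, the sketch below estimates first-order Sobol indices with the pick-freeze (Saltelli) scheme on the Ishigami test function, a standard SA benchmark; the sample sizes are illustrative.

```python
# Minimal sketch of variance-based sensitivity analysis: first-order Sobol
# indices estimated with the pick-freeze (Saltelli) scheme on the Ishigami
# test function. Pure NumPy, no SA library required.
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2 + b * x[:, 2] ** 4 * np.sin(x[:, 0])

rng = np.random.default_rng(0)
n, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))  # two independent input samples
B = rng.uniform(-np.pi, np.pi, (n, d))
fA, fB = ishigami(A), ishigami(B)
var = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                 # "pick" input x_i, "freeze" the rest
    S1 = np.mean(fB * (ishigami(ABi) - fA)) / var
    print(f"S1[x{i + 1}] = {S1:.3f}")   # analytic values ~0.31, ~0.44, 0.0
```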
This session invites contributions that discuss advances, both in theory and/or application, in methods for SA/UA applicable to all Earth and Environmental Systems Models (EESMs), which embraces all areas of hydrology, such as classical hydrology, subsurface hydrology and soil science.
Topics of interest include (but are not limited to):
1) Novel methods for effective characterization of sensitivity and uncertainty
2) Analyses of over-parameterised models enabled by AI/ML techniques
3) Single- versus multi-criteria SA/UA
4) Novel methods for spatial and temporal evaluation/analysis of models
5) The role of information and error on SA/UA (e.g., input/output data error, model structure error, parametric error, regionalization error in environments with no data etc.)
6) The role of SA in evaluating model consistency and reliability
7) Novel approaches and benchmarking efforts for parameter estimation
8) Improving the computational efficiency of SA/UA (efficient sampling, surrogate modelling, model identification and selection, model diagnostics, parallel computing, model pre-emption, model ensembles, etc.)
9) Methods for detecting and characterizing model inadequacy
In many instances, geoscientific applications are faced with sharp interfaces, such as fractures, faults, bedding planes, mineral dissolution/formation, or phase changes. These sharp interfaces often pose challenges in numerical modeling, as they may require conforming grids or introduce discontinuities into the solution. To mitigate these challenges, phase-field modeling has emerged as a valuable tool: it provides a continuous representation of discontinuities, circumventing the difficulties of tracking sharp interfaces. This approach offers improved predictions and insights into geo-material behavior across various scenarios. Given their increasing popularity, we are hosting a session dedicated to phase-field modeling in geoscientific applications such as fracturing, mineral dissolution or precipitation, and induced seismicity in geological carbon storage.
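To illustrate the core idea for readers unfamiliar with the method, the sketch below relaxes a sharp 1D interface under an Allen-Cahn phase-field equation with a double-well potential; the parameters are illustrative and not tuned to any geo-material.

```python
# Minimal sketch of the phase-field idea: the 1D Allen-Cahn equation
#   d(phi)/dt = -M * (f'(phi) - kappa * d2(phi)/dx2),  f = phi^2 (1 - phi)^2,
# evolves a diffuse interface (phi varying smoothly from 0 to 1) instead of
# tracking a sharp boundary explicitly. Explicit finite differences;
# parameters are illustrative only.
import numpy as np

nx, dx, dt, steps = 200, 1.0, 0.05, 2000
M, kappa = 1.0, 4.0
x = np.arange(nx) * dx
phi = np.where(x < nx * dx / 2, 0.0, 1.0)  # initial sharp interface at midpoint

def dfdphi(p):
    # derivative of the double-well potential f = p^2 (1 - p)^2
    return 2.0 * p * (1.0 - p) * (1.0 - 2.0 * p)

for _ in range(steps):
    lap = (np.roll(phi, 1) - 2 * phi + np.roll(phi, -1)) / dx**2  # periodic BC
    phi = phi + dt * (-M * (dfdphi(phi) - kappa * lap))

# After relaxation, phi varies smoothly across a diffuse interface whose
# width scales with sqrt(kappa).
print("diffuse-interface cells (0.05 < phi < 0.95):", int(np.sum((phi > 0.05) & (phi < 0.95))))
```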
We welcome contributions:
- to present recent developments in phase-field modeling in geoscientific applications, with a focus on techniques for improving modeling and computational efficiency,
- to address challenges ranging from microscale to field-scale applications, and potential solutions for overcoming the challenges,
- to encourage interactions between experimentalists and/or modelers for enhancing the real-world application of phase-field modeling in geoscientific research,
- to share insights on how AI can enhance phase-field modeling in geoscientific applications by addressing the computational challenges inherent in this method.
We look forward to the insights and contributions of scientists, researchers, and practitioners working with phase-field modeling in geoscientific contexts, in order to deepen the discussion of findings, methodologies, and challenges, especially those encountered in field-scale scenarios.
In this short course we will address the increasing role of artificial intelligence (AI) in geoscientific research, guiding participants through the various stages of the research process where AI tools can be implemented effectively and responsibly. We will explore freely available AI tools that can be used for data analysis, model development, and research publication. Additionally, the course aims to provoke reflection on the ethical implications of AI use, addressing concerns such as data bias, transparency, and the potential for misuse. Participants will engage in interactive discussions to explore what constitutes responsible and acceptable use of AI in geoscientific research, aiming to establish a set of best practices for integrating AI into scientific workflows.
Model Land is a conceptual place within the boundaries of a model. When we mistake a Model Land for the real world, we can be ignorant of the assumptions, limitations, uncertainties, and biases inherent in our models. These need to be carefully understood and considered before we use models to inform decisions about the real world; by doing so we can escape from our Model Lands (Thompson, 2019).
However, in order to escape, we need to explore our Model Lands, mapping them and developing a deeper understanding of their rules and boundaries. In this short course we will present a framework inspired by tabletop roleplaying games (TTRPGs) that will bring Model Lands to life. Using either your own model or one of our examples, you will learn how to build a world that follows its rules, how to investigate what it would be like to exist within that world, and how to share with others what you have learnt.
Please bring along a pen and paper and be prepared to share your Model Lands. We want to encourage creative expression, so if you have a flair for drawing, poetry, games design, or interpretive dance, feel free to bring along the means to share your creations through whatever medium you prefer.
Data assimilation (DA) is widely used in the study of the atmosphere, the ocean, the land surface, hydrological processes, etc. This powerful technique combines prior information from numerical model simulations with observations to provide a better estimate of the state of the system than either the data or the model alone. This short course will introduce participants to the basics of data assimilation, including the theory and its applications to various disciplines of geoscience. An interactive hands-on example of building a data assimilation system based on a simple numerical model will be given. This will prepare participants to build a data assimilation system for their own numerical models at a later stage after the course.
In summary, the short course introduces the following topics:
(1) DA theory, including basic concepts and selected methodologies.
(2) Examples of DA applications in various geoscience fields.
(3) Hands-on exercise in applying data assimilation to an example numerical model using open-source software.
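For a flavour of what such a hands-on system looks like, the sketch below implements a stochastic Ensemble Kalman Filter for the Lorenz-63 toy model, observing only one component. It is a minimal illustration under simplifying assumptions (forward Euler, no inflation or localization), not the course software.

```python
# Minimal sketch of ensemble data assimilation: a stochastic Ensemble Kalman
# Filter (EnKF) for the Lorenz-63 model, observing only the x component.
# All settings (ensemble size, error levels) are illustrative.
import numpy as np

def lorenz63(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])  # forward Euler step

rng = np.random.default_rng(1)
n_ens, obs_err = 20, 1.0
H = np.array([[1.0, 0.0, 0.0]])                 # observation operator: x only

truth = np.array([1.0, 1.0, 1.0])
ens = truth + rng.normal(0, 2.0, (n_ens, 3))    # perturbed initial ensemble

for cycle in range(100):
    for _ in range(25):                         # forecast: 25 model steps
        truth = lorenz63(truth)
        ens = np.apply_along_axis(lorenz63, 1, ens)
    obs = H @ truth + rng.normal(0, obs_err)    # synthetic observation of x
    X = ens - ens.mean(axis=0)                  # ensemble anomalies
    P = X.T @ X / (n_ens - 1)                   # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + obs_err**2 * np.eye(1))
    for m in range(n_ens):                      # update each member with a
        perturbed = obs + rng.normal(0, obs_err)  # perturbed observation
        ens[m] += K @ (perturbed - H @ ens[m])
    if cycle % 20 == 0:
        print(f"cycle {cycle:3d}: truth x = {truth[0]:7.2f}, analysis mean x = {ens[:, 0].mean():7.2f}")
```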
This short course is aimed at people who are interested in data assimilation but do not necessarily have experience with it, in particular early career scientists (BSc, MSc, PhD students and postdocs) and people who are new to the field.
Over the last decade, a flurry of machine learning methods has led to novel insights throughout geophysics. The data types processed are as varied as the applications, including environmental parameters, GNSS, InSAR, infrasound, and seismic data, but also downstream structured data products such as 3D data cubes, earthquake catalogs, and seismic velocity changes. Countless methods have been proposed and successfully applied, ranging from traditional techniques to recent deep learning models. At the same time, we are increasingly seeing the adoption of machine learning techniques in the wider geophysics community, driven by continuously growing data archives, accessible codes, and software. Yet, the landscape of available methods and data types is difficult to navigate, even for experienced researchers.
In this session, we want to bring together machine learning researchers and practitioners across the domains of geophysics. We aim to identify common challenges connecting different tasks, data types and formats, and to outline best practices for the development and use of machine learning. We also want to discuss how recent trends in machine learning, such as foundation models, the shift to multimodality, or physics-informed models, may impact geophysical research. We welcome contributions from all fields of geophysics, covering a wide range of data types and machine learning techniques. We also encourage contributions on machine-learning-adjacent tasks, such as big-data management, data visualization, or software development in the field of machine learning.
Assessing the spatial heterogeneity of environmental variables is a challenging problem in real applications when numerical approaches are needed. This is made more difficult by the complexity of Natural Phenomena, which are characterized by (Chiles and Delfiner, 2012):
- being unknown: knowledge of them is often incomplete, derived from limited and sparse samples;
- dimensionality: they can be represented in two- or three-dimensional domains;
- complexity: deterministic interpolators (e.g., Inverse Distance Weighting) may fail to provide exhaustive spatial distribution models, as they do not consider uncertainty;
- uniqueness: invoking a probabilistic approach, they can be assumed to be a realization of a random process and described by regionalized variables.
Geostatistics provides optimal solutions to this issue, offering tools to accurately predict values and uncertainty at unsampled locations while accounting for the spatial correlation of samples.
The course will address theoretical and practical methods for evaluating data heterogeneity in computational domains, exploiting the interplay between geometry processing, geostatistics, and stochastic approaches. It will be split into four parts, as follows:
- Theoretical Overview: Introduction to Random Function Theory and Measures of Spatial Variability
- Modeling Spatial Dependence: An automatic solution to detect both isotropic and anisotropic spatial correlation structures
- The role of Unstructured Meshes: Exploration of flexible, robust, and adaptive geometric modeling, coupled with stochastic simulation algorithms
- Filling the Mesh: Developing a compact and tangible spatial model that incorporates all alternative realizations, statistics, and uncertainty
The course will offer a comprehensive understanding of the key steps to create a spatial predictive model with geostatistics. We will also promote MUSE (Modeling Uncertainty as a Support for Environments) (Miola et al., STAG2022), an innovative and user-friendly open-source software package that implements the entire methodology. Tips on how to use MUSE will be provided, along with explanations of its structure and executable commands. Impactful examples will be used to show the effectiveness of geostatistical modeling with MUSE and the flexibility to use it in different scenarios, varying from geology to geochemistry.
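To give a flavour of the measures of spatial variability covered in the first parts of the course, the sketch below estimates an empirical isotropic semivariogram from scattered 2D samples with plain NumPy; MUSE automates and extends this step, and the data here are synthetic.

```python
# Minimal sketch of the first step in a geostatistical workflow: estimating
# an empirical (isotropic) semivariogram from scattered 2D samples, i.e.
# gamma(h) = 0.5 * mean[(z_i - z_j)^2] over pairs in each lag bin.
import numpy as np

rng = np.random.default_rng(7)
n = 400
coords = rng.uniform(0, 100, (n, 2))                       # sample locations
z = np.sin(coords[:, 0] / 15) + 0.2 * rng.normal(size=n)   # spatially correlated toy variable

# pairwise distances and squared value differences
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
sq = (z[:, None] - z[None, :]) ** 2
iu = np.triu_indices(n, k=1)                               # count each pair once
d, sq = d[iu], sq[iu]

bins = np.linspace(0, 50, 11)                              # lag bins up to 50 units
for lo, hi in zip(bins[:-1], bins[1:]):
    sel = (d >= lo) & (d < hi)
    if sel.any():
        print(f"lag {lo:4.0f}-{hi:4.0f}: gamma = {0.5 * sq[sel].mean():.3f}  (n={sel.sum()})")
```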
The course is designed for everyone interested in geostatistics and spatial distribution models, regardless of their prior experience.
In a changing climate, extreme weather and climate events have become more frequent and severe, and they are expected to keep increasing in this century and beyond. Unprecedented extremes in temperature, heavy precipitation, droughts, storms, river floods and related hot and dry compound events have increased over the last decades, negatively impacting broad socio-economic sectors (such as agriculture), damaging infrastructure, and putting human well-being at risk. This has raised many concerns in our society and within the scientific community about our current climate and our projected future, so a better understanding of the climate and the possible changes we will face is strongly needed. In order to answer these questions and address a wide range of uncertainties, very large data volumes are needed across different spatial (from local-regional to global) and temporal (past, current, future) scales; the sources are multiple (observations, satellites, models, reanalyses, etc.), and their resolutions differ. To deal with huge amounts of information, and to take advantage of their different resolutions and properties, computationally intensive Artificial Intelligence models are being explored in climate and weather research. In this short course, a novel method using Deep Learning models to detect and characterize extreme weather and climate events will be presented. The method can be applied to several types of extreme events, but the first implementation, on which we will focus in the short course, is its ability to detect past heatwaves. We will discuss the method and its applicability to different types of extreme events. The course will be developed in Python, and we encourage the climate and weather community to join the short course and the discussion!
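To make the detection target concrete, the sketch below labels heatwave days in a synthetic daily temperature series using one common definition (at least three consecutive days above the calendar-day 90th percentile); the definition, thresholds, and data are illustrative and not necessarily those used in the course.

```python
# Minimal sketch of labeling heatwave days, the kind of target a Deep Learning
# detector could be trained on: >= 3 consecutive days with daily maximum
# temperature above the calendar-day 90th percentile. Data are synthetic.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
days = pd.date_range("1990-01-01", "2019-12-31", freq="D")
tmax = 15 + 10 * np.sin(2 * np.pi * (days.dayofyear - 105) / 365.25) + rng.normal(0, 3, days.size)
s = pd.Series(tmax, index=days)

# calendar-day 90th percentile climatology
p90 = s.groupby(s.index.dayofyear).quantile(0.90)
hot = s > p90.reindex(s.index.dayofyear).to_numpy()

# a heatwave day belongs to a run of >= 3 consecutive hot days
run_id = (hot != hot.shift()).cumsum()
run_len = hot.groupby(run_id).transform("size")
heatwave = hot & (run_len >= 3)
print(f"hot days: {hot.sum()},  heatwave days: {heatwave.sum()}")
```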
Machine Learning (ML) is on the rise as a tool for cryospheric sciences. It has been used to label, cluster, and segment cryospheric components, as well as emulate, project, and downscale cryospheric processes. To date, the cryospheric community mainly adapts and develops ML approaches for highly specific domain tasks. However, different cryospheric tasks can face similar challenges, and when an ML method addresses one problem, it might be transferable to others. Thus, we invite the community to share their current work and identify potential shared challenges and tasks. We invite contributions across the cryospheric domain, including snow, permafrost, glaciers, ice sheets, and sea ice. We especially call for submissions that use novel machine learning techniques; however, we welcome all ML approaches, ranging from random forests to deep learning. Other contributions, such as datasets, theoretical research, and community-building efforts, are also welcome. By identifying shared challenges and transferring knowledge, we aim to channel resources and increase the impact of ML as a tool to observe, assess, and model the cryosphere.
NFDI4Earth recognises the crucial role of data centers in bridging disciplinary divides within the Earth System Sciences (ESS) and beyond. This session, proposed by co-applicants of NFDI4Earth (www.nfdi4earth.de), aims to leverage the ITS programme group's focus on interdisciplinary and transdisciplinary approaches. Contributions from both data center providers and researchers exploring data center challenges and opportunities in interoperability within ESS are welcome.
The session will explore how data centers can facilitate collaboration and address complex challenges by:
Integrating Disciplines: Explore interdisciplinary data fusion and stakeholder engagement for data management.
Addressing Socially Relevant Problems: Address how data centers can aid transdisciplinary research on sustainability challenges and utilize data from public authorities for interdisciplinary ESS research and public engagement.
We encourage proposals from researchers and data center experts addressing:
I) Innovative Data Fusion: How can data centers seamlessly integrate diverse ESS data to tackle societal challenges?
II) Engaging Communities: How can data centers integrate stakeholder knowledge (academia, policymakers, public) for problem-solving?
III) Sustainability & Public Engagement: What opportunities exist for data centers to contribute to transdisciplinary research on sustainability challenges (e.g., climate change and impacts, biodiversity loss and safeguarding food security) and foster public engagement with ESS issues through citizen science data?
IV) Operation & Sustainability Models: How can we manage the diversity of ESS data centers and foster cooperation and interoperability?
Expected Outcomes:
Identify diverse interdisciplinary data management approaches in ESS.
Recommend enhanced collaboration between data centers, researchers, and stakeholders.
Develop strategies for leveraging data centers to support transdisciplinary research addressing societal challenges.
Create a roadmap for integrating stakeholder needs into NFDI4Earth data center practices.
In environmental and climate science, solving complex challenges requires holistic approaches, emphasising the need for effective data integration and interoperability. While these terms may initially sound like buzzwords, they become crucial once a measurement is digitised and physical phenomena are translated into models, which are eventually encoded into software. As environmental and climate scientists, our expertise is essential in addressing ecological challenges and mitigating climate-related risks. This responsibility extends to our engagement with research and e-infrastructures that support our excellent science. Just as humans communicate effectively to share insights, machines must seamlessly exchange data. In the era of big data, scientific research demands increasingly advanced methodologies and machine-to-machine (M2M) actionable data and services. Data repositories, High-Performance Computing (HPC) facilities, cloud service providers and other infrastructures empower researchers with more tools and (FAIR) technical solutions to drive the scientific process. Currently, a wealth of operational infrastructures in Europe, such as the Environmental Research Infrastructures (ENVRIs), the European Open Science Cloud (EOSC), and e-Infrastructures like EGI, offer these services, contributing to scientific progress.
This session aims to gather real-world examples from environmental and climate research across multi- and interdisciplinary domains (atmosphere, marine, ecosystems, solid earth), from data product developers, data scientists and engineers. Contributors will demonstrate research outcomes, showcase research or scientific development projects, discuss challenges, and propose best practices built on successful infrastructure support. We welcome contributions from data-driven research, presenting aspects of data analytics, visualisation, data collection and quality control. We also invite use cases of interoperable infrastructures and enhanced collaborations with cloud services. Experiences from Virtual Access and/or Transnational Access programs, and innovative projects embracing the transformative potential of digital twins, are also welcome. Join us as we explore how seamless data sharing and integration are accelerating climate science, enabling faster, more accurate models and solutions to pressing global environmental challenges.
Nowadays, sensors, surveys and lab experiments are producing increasingly large quantities of data, and many tools are available to process and analyse them; however, these are often fragmented, stand-alone systems that may hinder collaboration and comprehensive understanding.
e-Infrastructures and Virtual Research Environments (VREs) permit researchers located in different places worldwide to cooperate on research activities and on national and international projects from their home institutions. e-Infrastructures enable collaborations among researchers by providing shared access to unique or distributed scientific facilities, including data, instruments, computing and communications. VREs are revolutionising the way research is conducted by providing a cohesive ecosystem where researchers can manage the entire research lifecycle, from data collection and analysis to publication and sharing, in the spirit of the Open Science principles and guaranteeing multi-disciplinary approaches.
This session aims to bring together case studies and innovative approaches from the different domains of the earth sciences to stimulate discussion in this multi-disciplinary applied research field. We seek contributions from all disciplines of the earth sciences that have dealt with the different aspects of e-infrastructures and VREs, ranging from IT implementation to software development, the data used and collected, and the implementation of analysis tools, but also policies for e-infrastructure utilisation, highlighting best practices and lessons learned.
Overview
Cloud computing has emerged as the predominant paradigm, underpinning nearly all industrial applications and a significant portion of academic and research projects. Since its inception and widespread adoption, migrating to cloud computing has posed substantial challenges for numerous organizations and enterprises. Leveraging cloud technologies to process big data near their physical locations represents an ideal use case. These cloud resources provide the necessary infrastructure and tools, especially when combined with high-performance computing (HPC) capabilities. The integration of GPUs and other pervasive technologies—such as application containerization and microservice architecture—across public and private cloud infrastructures further supports computation-intensive AI/ML workloads that used to reside only within HPC environments.
Session Focus
This session focuses on use cases involving both Cloud and HPC computing. The goal is to assess the current landscape and outline the steps needed to facilitate the broader adoption of cloud computing in Earth Observation and Earth Modeling data processing. We invite contributions that explore various cloud computing initiatives within these domains, including but not limited to:
Big Data Infrastructures and Platforms: Case studies, techniques, models, and algorithms for data processing on the cloud.
Cloud Federations and Interoperability: Scalability and interoperability across different domains, multi-provenance data management, security, privacy, and green and sustainable computing practices.
Cloud Applications, Infrastructure, and Platforms: IaaS, PaaS, SaaS, and XaaS solutions.
Cloud-Native AI/ML Frameworks: Tools and frameworks for processing data using AI and ML on the cloud.
Cloud Storage and File Systems: Solutions for big data storage and management.
Operational Systems and Services: Deployment and management of operational systems on the cloud.
Data Lakes and Warehouses: Implementation and management of data lakes and warehouses on cloud platforms.
Convergence of Cloud Computing and HPC: Workload unification for Earth Observation (EO) data processing between cloud and HPC.
We encourage researchers, practitioners, and industry experts to share their insights, case studies, and innovative solutions that promote the integration of cloud computing and HPC in Earth Observation and Earth Modeling.
Data plays a crucial role in driving innovation and making informed decisions. The European strategy for data is a vision of data spaces that foster creativity and open data, while prioritizing personal data protection. The sheer quantity of Earth Observation (EO) data available has huge potential, but it is also challenging to handle with traditional workflows and infrastructures. Extracting full value from datasets provided by governments, international organizations, and the private sector requires novel approaches to data access.
New data spaces are being developed in Europe, such as the Copernicus Data Space Ecosystem and the Green Deal Data Space, as well as multiple national data spaces. These provide streamlined access to data, on-board processing and online visualization, generating actionable knowledge and supporting more effective decision-making. Global EO datasets and space research monitoring data can be accessed via APIs, creating analysis-ready data for machine learning applications. In unison, these advances in data availability and processing capacity are transformative for geoscience research and industry.
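As an illustration of the API-based access mentioned above, the sketch below queries a public STAC catalog for analysis-ready Sentinel-2 scenes; the endpoint and parameters are examples, and any STAC-compliant data space works the same way (requires the pystac-client package).

```python
# Minimal sketch of API-based access to analysis-ready EO data via a STAC
# catalog, here the public Earth Search endpoint (an example; any STAC API
# behaves identically).
from pystac_client import Client

catalog = Client.open("https://earth-search.aws.element84.com/v1")
search = catalog.search(
    collections=["sentinel-2-l2a"],
    bbox=[11.2, 47.9, 11.8, 48.3],          # rough bounding box around Munich
    datetime="2024-06-01/2024-06-30",
    query={"eo:cloud_cover": {"lt": 20}},   # only scenes with < 20% clouds
    max_items=5,
)
for item in search.items():
    print(item.id, item.properties["eo:cloud_cover"])
    # item.assets holds cloud-optimized GeoTIFF URLs ready for ML pipelines
```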
However, building these data spaces is often challenging. Geographically, there is significant variation in policy surrounding data sharing, privacy, public-private partnerships, and intellectual property, leading to variation in the design and implementation of data spaces. Moreover, there are major technical challenges to operating data spaces, including the rapid development of computing power and differences in features between traditional and cloud-based IT systems. Finally, there are cultural challenges, as potential users are often reluctant to embrace new technologies. Therefore, we need a deeper understanding of data spaces: their design, creation, and integration, and how they can optimize and improve workflows.
This session connects developers and users of data spaces, showing how these systems facilitate the sharing, integration, and flexible processing of geographic data from diverse sources. We invite contributions from non-earth science-based data spaces to share the lessons they have learned that are relevant for EO data spaces and their users too, as well as projects in their infancy. The speakers will discuss how ongoing efforts to build data spaces will connect with existing initiatives on data sharing and processing, and present examples of innovative services that can be built upon data spaces.
Data Spaces are born of the big data paradigm, in which many sources produce constant streams of Earth observation data (remote sensing, citizen science, IoT sensors and in-situ). The traditional organization and computation of data is no longer efficient, as data are constantly evolving and mixed in new ways, making it more difficult to extract knowledge, and even more so in a standardized way.
The Green Deal Data Space (GDDS) is the solution provided by the EC to exploit the major potential of data in support of the Green Deal priority actions on climate change, circular economy, pollution, biodiversity, and deforestation. In this context, an increasing number of in-situ sensors producing observations of reasonable quality are being deployed in several fields (e.g., air quality, water quality, and animal camera trapping), alongside citizens collecting environmental data through mobile apps, online forms, etc. Many of these systems have wireless connections which allow data to be made automatically available in central repositories (such as Research Infrastructures, Digital Twins and Data Lakes) that can then be part of a Data Space.
This session aims to explore new in-situ data solutions in the context of the GDDS to leverage data integration and data management in a more efficient way, particularly, but not limited to:
- systematized in-situ observation requirements gathering/formulation to address data needs and gaps,
- sensor deployment, collection and integration in servers following geospatial standards (e.g., STAplus, Camtrap DP),
- semantic uplifting for citizen science data cataloguing and searching,
- authentication issues for sensitive data transfer,
- integration, computation and analysis of in-situ data (sensors, citizen science, field campaigns) into and with satellite-based data,
- sensor requesting, integration and visualization tools,
- in-situ data cataloguing initiatives,
- others.
In ESS, researchers tackle critical questions that have significant implications for society. Addressing these complex issues often requires access, integration, and analysis of data across scales and disciplinary boundaries. To achieve this, researchers need sustainable support in data management and collaborative analysis throughout the entire data lifecycle. Research data infrastructures (RDIs) can provide sharing and management of research products in a systematic way to enable interoperable research. Through their offerings and services, they can shape research practices and are strongly connected with the communities of users who identify and associate themselves with them. For this, fostering FAIRness and openness, e.g. by applying established standards for metadata, data, and/or scientific workflows, and implementing innovative concepts such as FAIR Digital Objects (FDO) to ensure that data can be reused by both humans and machines across different platforms and disciplines, is crucial.
Even though it is clear that RDIs are indispensable for solving big societal problems, their wide adoption requires a cultural change within research communities. At the same time, RDIs themselves must be developed further to serve generic and domain-specific user applications. The sustainability of RDIs must be improved, international cooperation must be increased, and duplication of development efforts must be avoided. To provide a community of diverse career stages and backgrounds with a convincing infrastructure established beyond national and institutional boundaries, new collaboration patterns and funding approaches must be tested so that RDIs foster cultural change in academia and become a reliable foundation for FAIR and open research.
This session provides a forum and welcomes contributions about:
- user stories, use cases, and storylines that show how and why using data across scales and disciplines is important.
- innovative infrastructure concepts on all scales, ranging from regional to international
- findings on user needs and methods or concepts to develop high-quality user interfaces, such as portals
- solutions that demonstrate building blocks/components for specific research infrastructures, e.g., tackling interoperability challenges
- approaches to provide or reuse sustainable software solutions
- ideas to enable cultural change and international collaboration
This session will explore the extensive potential of Google Earth Engine and its significant impact on addressing various geospatial challenges. With its wide-ranging datasets and powerful processing capabilities, Google Earth Engine empowers researchers and professionals to address crucial issues such as biodiversity loss, climate change, and human impacts on the environment with exceptional precision and efficiency. The session will show the diverse applications of Google Earth Engine's advanced tools and comprehensive catalog, illustrating how they advance our understanding of Earth's system, support vital research, and contribute to global sustainability. We will explore its utilization in environmental monitoring, conservation efforts, urban planning, agriculture, disaster management, and more, showcasing how this technology equips scientists, organizations, and decision-makers to address some of the most critical issues at local and global scales.
We welcome contributions that highlight innovative cloud-based solutions, including applications, case studies, databases, remote sensing products, tool development, methodological breakthroughs, integration efforts, and discussions on geospatial analysis challenges and future directions. This session provides a unique opportunity to present your work, inspire others, and envision the future of geospatial analysis with Google Earth Engine.
The concept of creating a digital replica of the Earth has emerged as a revolutionary approach to tackle challenges of resilience to climate change or severe weather events, and sustainable development. This session explores the developments and applications of highly detailed digital models of the Earth systems that monitor and simulate natural phenomena, hazards and the related human activities. By leveraging advanced computational techniques, real-time data, and predictive modelling, these digital Earth twins provide, together with relevant exploitation services, a powerful platform for analysing climate change impacts, optimising resource management, and designing effective adaptation and mitigation strategies.
We invite contributions that showcase cutting-edge innovations in Earth system modelling, present case studies demonstrating the practical application of digital replicas in weather or climate change impacts and sustainability planning, and engage in discussions about how societal implications can best be integrated into these systems. A key emphasis will be on the potential of Earth replicas to offer tailored insights and services for policymakers, empowering them to make informed decisions regarding climate adaptation, disaster risk management, and sustainable policies.
Seismological and geophysical research consistently relies on sophisticated tools for data analysis, modelling, and interpretation. However, the rapid development and diversification of research software pose challenges in maintaining code quality, ensuring comprehensive documentation, achieving reproducibility of results, and enabling uninterrupted workflows comprising various tools for seamless data analysis. As researchers increasingly rely on complex computational tools, it becomes essential to address these challenges in scientific software development, to avoid inefficiencies and errors and to ensure that scientific findings are reliable and can be built upon by future researchers.
We welcome contributions that introduce software tools/toolboxes and their real-world applications, showcasing how they have advanced the field and providing practical insights into the development/application process. Additionally, we seek presentations that discuss methodologies for software testing, continuous integration in software projects, upgrades and deployment. Moreover, we are looking for case studies demonstrating the successful implementation of these tools in various seismological/geophysical problems and how they can bring value to the community.
Sharing of resources, toolboxes, and knowledge is encouraged to improve the overall quality and (re)usability of research software. We encourage the inclusion of demonstrations to showcase usability and functionality examples, as well as videos to illustrate proposed workflows. Videos and other resources can be added as supplementary material and will be available after the conference. Depending on the technical setup and the time available, we will also support live demonstrations for the on-site participants.
We warmly invite seismologists, geophysicists, software developers, and researchers to participate in this session and share their insights, experiences, and solutions to elevate software development standards and practices in our field. Join us to contribute to and learn from discussions that will drive innovation and excellence in seismological and geophysical research.
In an era marked by increasing hazards, dwindling natural resources, and the undeniable effects of climate change, it is imperative to leverage cutting-edge technologies to confront these pressing challenges at the grassroots level. The proposed scientific session aims to assemble a cohort of experts and researchers spanning diverse disciplines to explore innovative approaches and solutions using time series aerial and satellite remote sensing data combined with geospatial technology software. The session will investigate the practical applications of these technologies in understanding and mitigating local challenges related to hazards, natural resources, and climate change.
The conveners extend a warm invitation for contributions from researchers in applied and theoretical domains, emphasizing innovative methodologies and practical applications of time-series imagery from satellites and aerial sources for tackling local challenges related to hazards, natural resources, and climate change. We encourage using data acquired across the electromagnetic spectrum, encompassing optical and Synthetic Aperture Radar (SAR) data, obtained globally through aerial and satellite missions.
The session objectives below outline a platform for in-depth exploration of innovative solutions at the intersection of remote sensing, geospatial technology, and local environmental challenges.
Session Objectives:
[1] Showcasing Technological Advancements: Highlight the latest progressions in remote sensing technology and geospatial software that facilitate the monitoring and analysis of local hazards, natural resources, and the impacts of climate change.
[2] Case Studies and Research Findings: Engage in discussions centered around case studies and research outcomes that underscore the efficacy of time series remote sensing data in effectively addressing specific local challenges.
[3] Promoting Interdisciplinary Collaboration: Foster a collaborative environment among researchers, policymakers, and practitioners to cultivate actionable strategies for tackling local issues through interdisciplinary cooperation.
Fast and reliable access to large datasets is the foundation of hydrological research. According to the FAIR principles, sustainable research data should be findable, accessible, interoperable, and reusable in a way that guarantees the reproducibility of research experiments. There are several global and regional hydrological databases that provide harmonized data from different data sources. They thereby serve as archives as well as intermediaries between data providers and users. The great value of these databases is shown in the diversity of studies, assessments and data products originating from the provided data, supporting the integrative understanding of the hydrologic cycle. At national and international levels, these databases are also used for the assessment of water resources for policy guidance.
This session aims to showcase ideas, concepts, efforts and challenges in developing data products, as well as to demonstrate the benefits of setting up and maintaining networks and of sharing data in order to support the data acquisition ambitions of data centres. This session contributes to the IHP-IX (2022-2029) goal, which puts science, research and management into action for a water-secure world. We invite contributions on the following topics:
1. Data services: processing, quality assurance and data discovery
- Methods and challenges of collection and provision of reliable data and metadata to the science community
- Improvement in database services e.g. versioning, dissemination or integration of new features that are relevant to science and research applications
- Development of ontologies and reference datasets showing how metadata can be used to streamline data findability
2. Tools and data-derived products for integrative observation of the hydrologic cycle
- Integrated data products derived from the analysis of existing databases
- Tools and platforms for data exchange and exploration
- Collaborative and interoperable data platforms to create a contextual and unified analysis for better decision making
3. From data to action: role of data services in operational hydrology
- Data-driven studies and projects that aim to support decision making and policy formulation
- Studies showing the contribution of large data services to assessing water resources at national, regional and global scales
- Case studies demonstrating the benefits of operational observation networks to improve local, regional and global hydrological products and services
Recent Earth System Sciences (ESS) datasets, such as those resulting from very high resolution numerical modelling, have increased in both precision and size. These datasets are central to the advancement of ESS for the benefit of all stakeholders, to public policymaking on climate change, and to the performance of modern applications such as Machine Learning (ML) and forecasting.
The storage and shareability of ESS datasets have become an important discussion point in the scientific community. It is apparent that datasets produced by state-of-the-art applications are becoming so large that even current high-capacity data centres and infrastructures are incapable of storing them, let alone ensuring their usability and processability. The needs of ongoing and upcoming community activities, such as various digital-twin-centred projects or the 7th Phase of the Coupled Model Intercomparison Project (CMIP7), already stretch the abilities of current infrastructures. With future investment in hardware being limited, a viable way forward is to explore the possibilities of data reduction and compression with the needs of stakeholders in mind. Interest in data compression has therefore grown, with the goals of 1) making data volumes more manageable, 2) reducing data transfer times and resource needs, and 3) doing so without reducing the quality of scientific analyses.
Concurrently, replicability is another major concern for ESS and downstream applications. Being able to reproduce the most recent ML and forecasting results, and analyses thereof, has become mandatory for developing new methods and integrated workflows for operational settings. On the other hand, the data accuracy needed to produce reliable downstream products has not yet been thoroughly investigated. Therefore, research on data reduction and prediction interpretability helps to 1) understand the relationship between the datasets and the resulting predictions and 2) increase the stability of predictions.
This session discusses the latest advances in both data compression and reduction for ESS datasets, focusing on:
1) Approaches and techniques to enhance the shareability of high-volume ESS datasets: data compression (lossless and lossy) or reduction approaches (see the illustrative sketch after this list).
2) Understanding the effects of reduction and replicability: feature selection, feature fusion, sensitivity to data, active learning.
3) Analyses of the effect of reduced/compressed data on numerical weather prediction and/or machine learning methods.
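As one concrete illustration of the first focus, the sketch below (plain NumPy; the synthetic field and the keepbits choice are illustrative, and this is only one of many reduction techniques) applies mantissa bit rounding: a lossy pre-processing step that leaves the data in standard float32 but makes it far more compressible by lossless codecs, at a bounded relative error.

```python
import numpy as np

def round_mantissa(a, keepbits):
    """Keep `keepbits` of float32's 23 mantissa bits (round-to-nearest).

    The zeroed trailing bits compress very well with lossless codecs
    (e.g. inside Zarr or netCDF chunks). Assumes 0 < keepbits < 23.
    """
    bits = np.asarray(a, dtype=np.float32).view(np.uint32)
    drop = 23 - keepbits
    half = np.uint32(1 << (drop - 1))                   # rounding offset
    mask = np.uint32(((1 << 32) - 1 >> drop) << drop)   # clears dropped bits
    return ((bits + half) & mask).view(np.float32)

field = np.random.default_rng(0).normal(288.0, 5.0, (180, 360)).astype(np.float32)
approx = round_mantissa(field, keepbits=7)
# Relative error is bounded by roughly 2**-(keepbits + 1)
print(float(np.max(np.abs(field - approx) / np.abs(field))))
```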
We invite general and domain-science perspectives and experiences on the challenges of energy efficiency in large-scale computing for earth and space sciences research and applications. AI-enabled solutions will require integrating multimodal data while remaining cognizant of the energy demands introduced by petabytes of data, artificial intelligence, and computational methods. Planetary-scale simulations and AI solutions are energy-intensive. The energy footprint can be minimized by optimizing the energy value chain across the computing continuum, involving algorithms, applications, methods, data management, hardware and infrastructure. Reduced- and mixed-precision algorithms on supported hardware offer the potential to minimize computational and energy costs. Data reduction and feature optimization techniques are essential for data efficiency. The multimodality of diverse data collections presents special challenges in optimizing the amount of data used in foundation models. There is little guidance in the research community on developing a computational plan for the optimal use of resources for pretraining and inferencing frontier AI models using multimodal scientific data. Partnerships and collaborations across international agencies, academia and industry at the working level are essential for synergizing our efforts toward AI solutions.
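To make the reduced- and mixed-precision idea concrete, the following minimal NumPy sketch (synthetic data; an illustration of the pattern rather than an HPC implementation) stores data in float32, halving memory traffic relative to float64, while accumulating the reduction in float64 to contain round-off:

```python
import numpy as np

rng = np.random.default_rng(42)
x64 = rng.standard_normal(10_000_000)      # 80 MB in float64
x32 = x64.astype(np.float32)               # 40 MB: half the memory traffic

naive = np.sum(x32, dtype=np.float32)      # single-precision accumulator
mixed = np.sum(x32, dtype=np.float64)      # float32 data, float64 accumulator
exact = np.sum(x64)

print(f"float32 accumulation error: {abs(naive - exact):.3e}")
print(f"mixed-precision error:      {abs(mixed - exact):.3e}")
```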
The session will discuss advances, early results and best practices related to energy efficiency at all scales across the computing continuum. We invite submissions related (but not limited) to the following topics on energy efficiency in computing: applications, algorithms, data management, and hardware solutions, including domain-specific hardware, workflows, and integrated approaches. The session also encourages discussion on partnerships, collaborations and broad community involvement toward energy-efficient solutions.
Pangeo (pangeo.io) is a global community of researchers and developers who tackle big geoscience data challenges in a collaborative manner using infrastructure ranging from laptops to HPC and Cloud. This session aims:
- to motivate researchers who are using or developing in the Pangeo ecosystem to share their endeavours with a broader community that can benefit from these new tools;
- to contribute to the Pangeo community in terms of potential new applications for the Pangeo ecosystem, whose core packages include Xarray, Iris, Dask, Jupyter, Zarr, Kerchunk and Intake.
We warmly welcome contributions that detail various Cloud computing initiatives within the domains of Earth Observation and Earth System Modelling, including but not limited to:
- Cloud federations, scalability and interoperability initiatives across different domains, multi-provenance data, security, privacy and green and sustainable computing.
- Cloud applications, infrastructure and platforms (IaaS, PaaS, SaaS and XaaS).
- Cloud-native AI/ML frameworks and tools for processing data.
- Operational systems on the cloud.
- Cloud computing and HPC convergence and workload unification for EO data processing.
We also welcome presentations using at least one of Pangeo's core packages in any of the following domains:
- Atmosphere, Ocean and Land Models
- Satellite Observations
- Machine Learning
- And other related applications
We welcome contributions on the above themes whose scientific results may be presented in other EGU sessions, but which here focus on the research, data management, software and/or infrastructure aspects. For instance, you can showcase your implementation through live executable notebooks.
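As a flavour of such contributions, a typical Pangeo-style workflow opens an analysis-ready cloud archive lazily with Xarray and Dask; in this minimal sketch the Zarr store URL and variable name are placeholders:

```python
import xarray as xr

# Placeholder URL: any analysis-ready Zarr archive is opened the same way
ds = xr.open_zarr(
    "s3://bucket/sea-surface-temperature.zarr",
    storage_options={"anon": True},   # anonymous S3 access via s3fs
    chunks={},                        # keep the on-disk chunking as Dask chunks
)

# Lazy: builds a Dask task graph; nothing is read until .compute()/.plot()
monthly = ds["sst"].resample(time="1MS").mean()
print(monthly)
```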
We call on researchers working with continental- and global-scale datasets to produce time series of predictions of environmental variables, especially those focused on the essential variables. Radeloff et al. (2024) (the Landsat science team) have proposed 13 essential and many more desirable/aspirational products using medium-resolution imagery, referred to as “Medium-resolution satellite image-based products that meet the identified information needs for sustainable management, societal benefits, and global change challenges”. The desirable products include: maps of crop types, irrigated fields, land abandonment, forest loss agents, LAI/FAPAR, green vegetation cover fraction, emissivity, ice sheet velocity, surface water quality and evaporative stress. The aspirational land monitoring products include: forest types and tree species, urban structure, forest recovery, crop yields, forest biomass, habitat heterogeneity and winter habitat indices, net radiation, snow and ice sheet surface melt, ice sheet and glacier melt ponds, sea ice motion, and evaporation and transpiration. We will discuss modeling approaches, hybrid data-science/process-based models, and methods for accuracy assessment and visualization of uncertainty. Once time series of predictions are produced, they can be further used to analyze trends and detect potential ecosystem degradation or restoration.
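For the trend analysis step, one robust and widely used option is the Theil-Sen estimator; the sketch below (SciPy; a synthetic NDVI series, with the per-pixel application left out for brevity) flags candidate degradation or restoration from the sign of the slope's confidence interval:

```python
import numpy as np
from scipy.stats import theilslopes

years = np.arange(2000, 2025)
rng = np.random.default_rng(1)
ndvi = 0.62 - 0.004 * (years - 2000) + rng.normal(0, 0.02, years.size)  # synthetic

# Robust slope with a 95% confidence interval (lo, hi)
slope, intercept, lo, hi = theilslopes(ndvi, years, alpha=0.95)
if hi < 0:
    print(f"significant decline: {slope:.4f} NDVI/yr (candidate degradation)")
elif lo > 0:
    print(f"significant increase: {slope:.4f} NDVI/yr (candidate restoration)")
else:
    print("no significant trend")
```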
The major contribution to the development of operational t-DASH systems suitable for supporting decision makers with continuously updated seismic hazard scenarios is expected to come from the real-time integration of multi-parametric observations. A preliminary step in this direction is the identification of those parameters (seismological, chemical, physical, biological, etc.) whose space-time dynamics and/or anomalous variability can be, to some extent, associated with the complex process of preparation of major earthquakes.
This session therefore encourages studies devoted to demonstrating the added value of introducing specific observations and/or data analysis methods within the t-DASH and StEF perspectives. Studies based on long-term data analyses, including different conditions of seismic activity, are particularly encouraged. Equally welcome are presentations of infrastructures devoted to maintaining and further developing our present observational capabilities for earthquake-related phenomena, thereby also contributing to building a global multi-parametric Earthquakes Observing System (EQuOS) to complement the existing GEOSS initiative.
To this aim, this session is addressed not just to seismology and natural hazards scientists but also to geologists and to atmospheric sciences and electromagnetism researchers, whose collaboration is particularly important for fully understanding the mechanisms of earthquake preparation and their possible relation to other measurable quantities. For this reason, all contributions devoted to the description of genetic models of earthquake precursory phenomena are equally welcome.
Co-organized by EMRP1/ESSI2/GI6, co-sponsored by JpGU and EMSEV
Modern large datasets lay the foundation for our understanding of the world, regardless of whether we are utilizing traditional statistical approaches or applying modern AI/ML approaches. In recent years, the terrestrial data collected by governmental organizations, research institutions and NGOs has been increasingly complemented by products generated from remote sensing missions, leading to an increasingly rich set of derived products. The European strategy to support such efforts is the vision of Common European Data Spaces, which will make more data available for access and reuse. In addition to prioritizing personal data protection and consumer safeguards, there is a strong focus on upholding the FAIR principles. The requirements for enabling FAIR use of these data remain the same: all data must be findable, accessible, interoperable and reusable. While great advances have been made in making data stemming from terrestrial data acquisition activities FAIR, products generated from satellite data lag behind, unfortunately hindering FAIR access. This pertains to the following aspects of FAIR:
- Findable: provision of rich metadata as required to both identify relevant datasets and fully understand their provenance, in order to determine applicability to a specific endeavor
- Accessible: at present, diverse web services and APIs are tailored to the provision of raw satellite data; these data access approaches show deficits when dealing with derived products (see the illustrative sketch after this list)
- Interoperable: related to the accessibility issue stated above, the formats utilized for raw satellite data are not sufficient for derived products, standardized interchange formats need to be tailored to this purpose
- Reusable: tools are available for analysis and processing of gridded data, but are often not aware of caveats entailed by the nature of available datasets.
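To illustrate the accessibility point above, discovery of satellite data and derived products increasingly goes through standardised SpatioTemporal Asset Catalog (STAC) APIs; in the minimal sketch below, the public Earth Search endpoint is one well-known example, and the bounding box and dates are illustrative:

```python
from pystac_client import Client

catalog = Client.open("https://earth-search.aws.element84.com/v1")
search = catalog.search(
    collections=["sentinel-2-l2a"],
    bbox=[11.2, 47.0, 11.6, 47.4],          # illustrative area of interest
    datetime="2024-06-01/2024-06-30",
    query={"eo:cloud_cover": {"lt": 20}},   # STAC query extension
)
for item in search.items():
    print(item.id, item.properties["eo:cloud_cover"])
```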
Within this session, jointly organised by 4 European projects (AD4GD, USAGE, FAIRiCUBE and BCUBED), we will be bringing examples of international initiatives that aim to resolve these issues and increase the FAIRness of the ever increasing amount of data being made available from diverse sources. This will help to guide the creation of the European Dataspaces, bringing them to their full potential in supporting both policy and society in facing future challenges.
Database documentation and sharing is a crucial part of the scientific process, and more scientists are choosing to share their data on centralised data repositories. These repositories have the advantage of guaranteeing immutability (i.e., the data cannot change), which is, however, less suitable for living databases (e.g., in continuous citizen science initiatives). At the same time, citizen science initiatives are becoming more and more popular in various fields of science, from natural hazards to hydrology, ecology and agronomy.
In this context, distributed databases offer an innovative approach to both data sharing and evolution. These systems have the distinct advantage of becoming more resilient and available as more users access the same data, and since distributed systems, unlike decentralised ones, do not use blockchain technology, they are orders of magnitude more efficient in data storage as well as completely free to use. Distributed databases can also mirror existing data, so that scientists can keep working in their preferred Excel, OpenOffice, or other software while automatically syncing database changes to the distributed web in real time.
This workshop will present the general concepts behind distributed, peer-to-peer systems. Attendees will then be guided through an interactive activity on Constellation, a scientific software for distributed databases, learning how to both create their own databases as well as access and use others' data from the network. Potential applications include citizen science projects for hydrological data collection, invasive species monitoring, or community participation in managing natural hazards such as floods.
Interdisciplinarity is becoming a common approach to solving socio-ecological problems, but datasets from different disciplines often lack interoperability. In this short course we will explore interoperability levels in the context of integrated research infrastructure services for the climate change crisis.
WEkEO offers a single access point to all of the environmental data provided by the Copernicus programme, as well as additional data from its four partner organisations. While data access is the first step for research based on EO data, the challenges of handling data soon become overwhelming with the increasing volume of Earth Observation data available. To cope with this challenge and to tame the Big Earth Data, WEkEO offers a cloud-based processing service for Earth Observation data coming from the Copernicus programme and beyond.
This course will explain new trends and developments in accessing, analysing and visualizing earth observation data by introducing concepts around serverless processing, parallel processing of big data and data cube generation in the cloud.
The session will begin with a theoretical introduction to cloud-based big data processing and data cube generation, followed by a demonstration of how participants can utilize these concepts within the WEkEO environment using its tools. Participants will have the opportunity to apply the concepts and tools in multi-disciplinary environmental use cases, bringing together different kinds of satellite data and earth observation products as data cubes in the cloud.
The course will start with a beginner-level introduction and demonstration before introducing more advanced functionalities of the WEkEO services. Prior knowledge of satellite data analysis/Python programming would be an advantage but is not a prerequisite. Comprehensive training material will be provided during the course to ensure that participants with varying degrees of knowledge of data processing can follow and participate.
The analysis and visualisation of data is fundamental to research across the earth and space sciences. The Pangeo (https://pangeo.io) community has built an ecosystem of tools designed to simplify these workflows, centred around the Xarray library for n-dimensional data handling and Dask for parallel computing. In this short course, we will offer a gradual introduction to the Pangeo toolkit, through which participants will learn the skills required to scale their local scientific workflows through cloud computing or large HPC with minimal changes to existing codes.
The course is beginner-friendly but assumes a prior understanding of the Python language. We will guide you through hands-on Jupyter notebooks that showcase scalable analysis of in-situ, satellite observation and earth system modelling datasets to apply your learning. By the end of this course, you will understand how to:
- Efficiently access large public data archives from Cloud storage using the Pangeo ecosystem of open source software and infrastructure.
- Leverage labelled arrays in Xarray to build accessible, reproducible workflows
- Use chunking to scale a scientific data analysis with Dask (see the sketch below)
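A minimal sketch of that chunked, lazy workflow (synthetic data standing in for a real archive; the same code runs unchanged on a laptop, an HPC system, or a Cloud Dask cluster):

```python
import dask.array as da
import numpy as np
import xarray as xr

# A year of daily 1000x1000 fields, split into lazily evaluated chunks;
# each chunk becomes one Dask task
data = da.random.random((366, 1000, 1000), chunks=(30, 500, 500))
temperature = xr.DataArray(
    data,
    dims=("time", "y", "x"),
    coords={"time": np.arange("2024-01-01", "2025-01-01", dtype="datetime64[D]")},
    name="temperature",
)

monthly = temperature.groupby("time.month").mean()  # lazy: builds a task graph
result = monthly.compute()                          # runs chunk-wise in parallel
```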
All the Python packages and training materials used are open-source (e.g., MIT, Apache-2, CC-BY-4). Participants will need a laptop and internet access but will not need to install anything. We will be using the free and open Pangeo@EOSC (European Open Science Cloud) platform for this course. We encourage attendees from all career stages and fields of study (e.g., atmospheric sciences, cryosphere, climate, geodesy, ocean sciences) to join us for this short course. We look forward to an interactive session and will be hosting a Q&A and discussion forum at the end of the course, including opportunities to get more involved in Pangeo and open source software development. Join us to learn about open, reproducible, and scalable Earth science!
Preparation: We recommend learners with no prior knowledge of Python review resources such as the Software Carpentry training material and Project Pythia in advance of this short course. Participants should bring a laptop with an internet connection. No software installation is required as resources will be accessed online using the Pangeo@EOSC platform. Temporary user accounts will be provided for the course and we will also teach attendees how to request an account on Pangeo@EOSC to continue working on the platform after the training course.
Addressing global environmental and societal challenges requires interdisciplinary, data-driven approaches. Today’s research produces unprecedented volumes of complex data and an increasing number of interactive data services, putting traditional information management systems to the test. Collaborative infrastructures are challenged by their dual role of advancing research and scientific assessments while facilitating transparent data and software sharing.
We invite abstracts from all data stakeholders that highlight innovative platforms, frameworks, systems, and initiatives designed to enhance access and usability of data for research on topics such as climate change, natural hazards, sustainable development, etc. We welcome presentations describing collaborations across national and disciplinary boundaries on infrastructure, standards, governance, best practices, and future directions for building trustworthy and interoperable data networks, guided by UNESCO’s Open Science recommendations, the FAIR and CARE data principles, that enable researchers worldwide to address pressing global problems through data.
Solicited authors:
Reyna Jenkyns
Co-organized by ERE1/GI2, co-sponsored by AGU and JpGU
Almost a decade ago, the FAIR data guiding principles were introduced to the broader research community. These principles proposed a framework to increase the reusability of data in and across domains during and after the completion of, e.g., research projects. In subdomains of the Earth System Sciences (ESS), such as the atmospheric sciences and parts of the geosciences, data reuse across institutions and geographical borders was already well-established, supported by community-specific and cross-domain standards like netCDF-CF and geospatial standards (e.g. OGC). Further, authoritative data producers such as the CMIPs were already using Persistent Identifiers and corresponding handle systems for data published in their repositories, so it was often thought and communicated that these data are "FAIR by design".
However, fully implementing the FAIR principles, particularly machine-actionability (the core idea behind FAIR), has proven challenging. Despite progress in awareness, standard-compliant data sharing, and the automation of data provenance, the ESS community continues to struggle to reach a community-wide consensus on the design, adoption, interpretation and implementation of the FAIR principles.
In this session, we invite contributions from all fields in Earth System Sciences that provide insights, case studies, and innovative approaches to advancing the adoption of the FAIR data principles. We aim to foster a collaborative dialogue on the progress our community has made, the challenges that lie ahead, and the strategies needed to achieve widespread acceptance and implementation of these principles, ultimately enhancing the future of data management and reuse.
We invite contributions focusing on, but not necessarily limited to:
- Challenges and solutions in interpreting and implementing the FAIR principles in different sub-domains of the ESS
- FAIR onboarding strategies for research communities
- Case studies of successful FAIR data implementation (or partial implementation) in ESS at infrastructure and research project level
- Methods and approaches to gauge the impact of FAIR data implementation in ESS
- Considerations on how AI might help to implement FAIR
- Future direction for FAIR data in ESS
Solicited authors:
Robert Huber, Christine Kirkpatrick
Performing research in Earth System Science is increasingly challenged by the escalating volumes and complexity of data, requiring sophisticated workflow methodologies for efficient processing and data reuse. The complexity of computational systems, such as distributed and high-performance heterogeneous computing environments, further increases the need for advanced orchestration capabilities to perform and reproduce simulations effectively. Along the same lines, the emergence and integration of data-driven models alongside traditional compute-driven ones introduces additional challenges in terms of workflow management. This session delves into the latest advances in workflow concepts and techniques essential to address these challenges, taking into account the different aspects linked with High-Performance Computing (HPC), Data Processing and Analytics, and Artificial Intelligence (AI).
In the session, we will explore the importance of the FAIR (Findability, Accessibility, Interoperability, and Reusability) principles and provenance in ensuring data accessibility, transparency, and trustworthiness. We will also address the balance between reproducibility and security, addressing potential workflow vulnerabilities while preserving research integrity.
Attention will be given to workflows in federated infrastructures and their role in scalable data analysis. We will discuss cutting-edge techniques for modeling and data analysis, highlighting how these workflows can manage otherwise unmanageable data volumes and complexities, as well as best practices and progress from various initiatives and challenging use cases (e.g., Digital Twins of the Earth and the Ocean).
We will gain insights into FAIR Digital Objects, (meta)data standards, linked-data approaches, virtual research environments, and Open Science principles. The aim is to improve data management practices in a data-intensive world.
On these topics, we invite contributions from researchers illustrating their approach to scalable workflows, as well as from data and computational experts presenting current approaches offered and developed by IT infrastructure providers enabling cutting-edge research in Earth System Science.
In recent decades, advances in geoinformation technology have played an increasingly important role in determining various parameters that characterize the Earth's environment. These technologies, often combined with conventional field surveying, spatial data analysis methods and/or simulation process models, provide efficient means for monitoring and understanding Earth's environment in a cost-effective and systematic manner. This session invites contributions focusing on modern open-source software tools developed to facilitate the analysis of mainly geospatial data in any branch of geosciences for the purpose of better understanding Earth's natural environment. We encourage the contribution of any kind of open-source tools, including those built on top of globally used commercial GIS solutions. Potential topics for the session include the presentation of software tools developed for displaying, processing and analysing geospatial data, and modern cloud webGIS platforms and services used for geographical data analysis and cartographic purposes. We also welcome contributions that focus on presenting tools that make use of parallel processing on high performance computers (HPC) and graphics processing units (GPUs), and also on simulation process models applied in any field of geosciences.
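As a small example of the open-source tooling in scope, the sketch below (GeoPandas; the file names are placeholders) joins point observations to polygons and maps the result:

```python
import geopandas as gpd

stations = gpd.read_file("stations.geojson")     # placeholder input files
catchments = gpd.read_file("catchments.shp")

stations = stations.to_crs(catchments.crs)       # harmonise coordinate systems
joined = gpd.sjoin(stations, catchments, predicate="within")

# Count stations per catchment and plot a simple choropleth
counts = joined.groupby("index_right").size()
catchments.assign(
    n_stations=counts.reindex(catchments.index, fill_value=0)
).plot(column="n_stations", legend=True)
```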
Code is read far more often than it's written, yet some still believe that complex, unreadable code equates to a better algorithm. In reality, the opposite is true. Writing code that not only works but is also clear, maintainable, and easy to modify can significantly reduce the cognitive load of coding, freeing up more time for scientific research. This short course introduces essential programming practices, from simple yet powerful techniques like effective naming, to more advanced topics such as unit testing, version control, and managing virtual environments. Through real-life examples, we will explore how to transform code from convoluted to comprehensible.
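As a tiny flavour of such a transformation, consider effective naming (an illustrative sketch, not drawn from any particular codebase):

```python
# Before: correct but opaque
def f(d, t):
    return [x for x in d if x[1] > t]

# After: the same algorithm, now self-documenting
def filter_stations_above(stations, threshold_m):
    """Return stations whose elevation (metres) exceeds `threshold_m`.

    Each station is a (name, elevation_m) tuple.
    """
    return [station for station in stations
            if station[1] > threshold_m]

alpine = filter_stations_above([("Innsbruck", 574), ("Sonnblick", 3106)], 1000)
```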
Software plays a pivotal role in various scientific disciplines. Research software may include source code files, algorithms, computational workflows, and executables. It refers mainly to code meant to produce data, and less so to, for example, plotting scripts one might create to analyze these data. An example of research software in our field is computational models of the environment. Models can aid pivotal decision-making by quantifying the outcomes of different scenarios, e.g., varying emission scenarios. How can we ensure the robustness and longevity of such research software? This short course teaches the concept of sustainable research software: software that is easy to update, maintain, and extend with new ideas, and that stays in sync with the most recent scientific findings. This maintainability should also be possible for researchers who did not originally develop the code, which will ultimately lead to more reproducible science.
This short course will delve into sustainable research software development principles and practices. The topics include:
- Properties and metrics of sustainable research software
- Writing clear, modular, reusable code that adheres to coding standards and best practices of sustainable research software (e.g., documentation, unit testing, FAIR for research software; see the sketch after this list).
- Using simple code quality metrics to develop high-quality code
- Documenting your code using platforms like Sphinx for Python
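As a minimal sketch of what documentation and unit testing can look like in practice (illustrative code, not taken from ReWaterGAP; the tests run with pytest and can be executed automatically on every commit in CI):

```python
# water_balance.py -- a small, documented, testable function
# (illustrative only; not actual WaterGAP code)

def runoff(precipitation_mm: float, evapotranspiration_mm: float) -> float:
    """Return a simple water-balance runoff in mm, clipped at zero."""
    return max(precipitation_mm - evapotranspiration_mm, 0.0)


# test_water_balance.py -- collected and run by `pytest`
def test_runoff_is_difference():
    assert runoff(100.0, 40.0) == 60.0

def test_runoff_never_negative():
    assert runoff(10.0, 40.0) == 0.0
```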
We will apply these principles to a case study of a reprogrammed version of the global WaterGAP Hydrological Model (https://github.com/HydrologyFrankfurt/ReWaterGAP). We will showcase its current state in a GitHub environment along with example source code. The model is written in Python but is also accessible to non-Python users. The principles demonstrated apply to all coding languages and platforms.
This course is intended for early-career researchers who create and use research models and software. Basic programming or software development experience is required. The course has limited seats available on a first-come-first-served basis.
Python is one of the fastest growing programming languages and has moved to the forefront in the earth system sciences (ESS), due to its usability, the applicability to a range of different data sources and, last but not least, the development of a considerable number of ESS-friendly and ESS-specific packages.
This interactive Python course is aimed at ESS researchers who are interested in adding a new programming language to their repertoire. Except for some understanding of fundamental programming concepts (e.g. loops, conditions, functions etc.), this course presumes no previous knowledge of and experience in Python programming.
The goal of this course is to give the participants an introduction to the Python fundamentals and an overview of a selection of the most widely-used packages in ESS. The applicability of those packages ranges from (simple to advanced) number crunching (e.g. Numpy), to data analysis (e.g. Xarray, Pandas) to data visualization (e.g. Matplotlib).
The course will be grouped into different sections, based on topics discussed, packages introduced and field of application. Furthermore, each section will have an introduction to the main concepts e.g. fundamentals of a specific package and an interactive problem-set part.
This course welcomes active participation in terms of both on-site/virtual discussion and coding. To achieve this, i) the course curriculum and material will be provided in the form of Jupyter Notebooks, ii) participants will have the opportunity to code up solutions to multiple problem sets, and iii) pre-written working solutions will be readily available. In these interactive sections of the course, participants are invited to try out their newly acquired skills and to code up potentially different working solutions.
We very much encourage everyone who is interested in career development, data analysis and learning a new programming language to join our course.
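To give a flavour of the packages covered, here is a small illustrative example (synthetic data) combining Numpy for number crunching, Pandas for time series handling, and Matplotlib for visualization:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
time = pd.date_range("2024-01-01", periods=365, freq="D")

# Synthetic daily temperatures: a seasonal cycle plus noise
temperature = 10 + 8 * np.sin(2 * np.pi * time.dayofyear / 365) + rng.normal(0, 1, 365)
series = pd.Series(temperature, index=time, name="temperature_degC")

monthly = series.resample("MS").mean()   # aggregate to monthly means
monthly.plot(marker="o", title="Monthly mean temperature (synthetic)")
plt.show()
```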
Julia offers a fresh approach to scientific computing, high-performance computing and data crunching. Designed from the ground up only recently, Julia avoids many of the weak points of older, widely used programming languages in science such as Python, Matlab, and R. Julia is an interactive scripting language, yet it executes at speeds similar to C(++) and Fortran. These qualities make it an appealing tool for the geoscientist.
Julia has been gaining traction in the geosciences over the last years in applications ranging from high-performance simulations, data processing, geostatistics, machine learning, differentiable programming to scientific modelling. The Julia package ecosystem necessary for geosciences has substantially matured, which makes it readily usable for research.
This course provides a hands-on introduction to get you started with Julia as well as a showcase of geodata visualisation and ocean, atmosphere and ice simulations.
The hands-on introduction will cover:
- learn about the Julia language and what sets it apart from others
- write simple Julia code to get you started with scientific programming (arrays, loops, input/output, etc.)
- installing Julia packages and management of package environments (similar, e.g., to virtual-environments in Python)
The showcase will feature a selection of:
- Visualisation of Geo-Data using the plotting library Makie.jl and various geodata libraries
- Global ocean modelling with Oceananigans.jl on CPUs and GPUs
- Interactive atmospheric modelling with SpeedyWeather.jl
- Ice flow modelling and data integration and sensitivity analysis using automatic differentiation on GPUs with FastIce.jl
Ideally, participants should install Julia on their laptops to allow a smooth start into the course. We will provide detailed documentation for this installation. However, we will also provide a JupyterHub, albeit connectivity to it may be spotty depending on Wi-Fi reception.
Imagine you have a wonderful and ancient butterfly collection that you would like to catalogue and digitalize, in such a way that other scientists can gain added value from it. What additional considerations would you need to take into account, apart from the technical requirements? What is the right way to describe your collection, so that not only specialists can find the data, but also an engineer looking for inspiration for new aerodynamic concepts, or an artist looking for specific natural colours, patterns or shapes?
Working together with specialists from your own research field is easy: you think in the same way, use the same language, the same vocabulary. It is the same in every discipline, in every field of research, and it is the interdisciplinarity that is the challenge. In Earth System Science the number of disciplines is huge, as is the number of well-known community standards, controlled vocabularies, ontologies ... Without translation (mapping) from one discipline to another, and without some help/tool to collect and find such terminologies, it's chaos.
In this course we want to analyze together the gaps that arise in your daily research activities due to interdisciplinary work and projects. What are the misunderstandings, the data misinterpretations you are facing? In an interactive process you will learn more about a solution to bring order to the chaos - terminologies and the services around them. The aim is to make it easier for you to produce and/or understand reusable data in the future. FAIR is more than a buzzword - let's bring it to life!
The ongoing atmospheric warming has a huge effect on the components of the terrestrial cryosphere. There is therefore a great need for operational monitoring networks to understand the impacts and long-term changes of the cryosphere. Global observing systems recognize three major Essential Climate Variables in permafrost: permafrost temperature, active layer thickness and rock glacier velocity. The monitoring of these parameters is covered by the groups of the Global Terrestrial Network – Permafrost (GTN-P) and Rock Glacier Inventories and Kinematics (RGIK), which will collaborate on organising this Short Course. Our aim is to provide participants with: a) general background on GTN-P and RGIK activities; b) the latest updates and a demonstration of the new version of the GTN-P database; c) current developments of the RGIK database and monitoring standards.
Structural geologic modeling is a crucial part of many geoscientific workflows. There is a wide variety of applications, from geological research to applied fields such as geothermics, geotechnical engineering, and natural resource exploration. This short course introduces participants to GemPy, a powerful and successful open-source Python library for 3D geological modeling. GemPy allows users to generate geological models efficiently and integrates well with the Python ecosystem, making it a valuable resource for both researchers and industry professionals.
The course will cover the following topics:
(1) Modeling approach and theoretical background: An introduction to the principles behind implicit structural geological modeling (see the conceptual sketch after this list).
(2) Creating a GemPy model: A step-by-step guide to building a basic geological model using GemPy.
(3) Open Source Project: An overview of GemPy's development, community, and opportunities for contribution.
(4) Outlook and applications: Exploration of the wide-ranging applications of GemPy in various geoscientific fields, including the link to geophysical inversion. Coding examples for advanced functionalities and examples from publications.
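As a conceptual sketch of topic (1), and explicitly not GemPy's own API, the following SciPy example interpolates a scalar field from a handful of synthetic interface points; a geological horizon is then an iso-surface (here the zero level) of that field:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Sparse points sampled on one (synthetic) geological interface
interface_pts = np.array([[0.0, 0.0, 5.0], [5.0, 0.0, 6.0],
                          [0.0, 5.0, 4.0], [5.0, 5.0, 5.5]])

# Off-surface points above/below encode the younging direction of the field
pts = np.vstack([interface_pts,
                 interface_pts + [0.0, 0.0, 1.0],
                 interface_pts - [0.0, 0.0, 1.0]])
values = np.concatenate([np.zeros(4), np.ones(4), -np.ones(4)])

field = RBFInterpolator(pts, values, kernel="thin_plate_spline")

# Evaluate the implicit scalar field on a grid; the horizon is scalar == 0
grid = np.mgrid[0:5:20j, 0:5:20j, 3:8:20j].reshape(3, -1).T
scalar = field(grid)
```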
This interactive course is designed for both beginners and those with some experience in geological modeling. Participants are encouraged to bring a laptop to actively engage in the tutorials and apply their newly acquired skills by writing their own code. While basic Python knowledge and a pre-installed Python environment are beneficial, they are not mandatory for participation.
In April 2023, EPOS, the European Plate Observing System, launched the EPOS Data Portal (https://www.ics-c.epos-eu.org/), which provides access to multidisciplinary data, data products, services and software from the solid Earth science domain. Currently, ten thematic communities provide input to the EPOS Data Portal through services (APIs): Anthropogenic Hazards, Geological Information and Modelling, Geomagnetic Observations, GNSS Data and Products, Multi-Scale Laboratories, Near Fault Observatories, Satellite Data, Seismology, Tsunami and Volcano Observations.
The EPOS Data Portal enables search and discovery of assets thanks to metadata, and their visualisation in map, table or graph views, including download of the assets, with the objective of enabling multi-, inter- and transdisciplinary research by following the FAIR principles.
This short course will provide an introduction to the EPOS ecosystem, demonstration of the EPOS Data Portal and hands-on training by following a scientific use case using the online portal. It is expected that participants have scientific background in one or more scientific domains listed above.
The training especially targets young researchers and all those who need to combine multi-, inter- and transdisciplinary data in their research. The use of the EPOS Data Portal will simplify data search for Early Career Scientists and potentially help accelerate their career development.
Feedback from participants will be collected and used for further improvements of the Data Portal.
OpenStreetMap (OSM) is probably the largest and best-known crowdsourced database of geospatial information. Its data can be harnessed to address a variety of scientific and policy questions, from urban planning to demographic studies, environmental monitoring, energy simulations and many others.
Understanding the structure and the variety of content in OSM can enable researchers and policymakers to use it as a relevant dataset for their specific objectives.
Moreover, familiarity with tools and services for filtering and extracting data per geographic area or topic can empower users to tailor OSM data to meet their unique needs. Additionally, learning to contribute new data to OSM enriches the database and fosters a collaborative environment that supports ongoing geospatial research and community engagement both for researchers themselves and also in interactions with stakeholders and citizens. By actively participating in the OSM community, geoscientists can ensure that the data remains current and relevant, ultimately enhancing the impact of their work in addressing pressing environmental and societal challenges.
The short course will begin with an introduction to the concepts and content of OpenStreetMap, followed by a brief review of services and tools for filtering, extracting, and downloading data. Participants will engage in hands-on activities to contribute new data directly, along with hints and tips on how to understand and evaluate the pros and cons of its open and collaborative foundational principles.
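One example of the filtering and extraction services covered is the public Overpass API; the minimal sketch below (the bounding box and tag were chosen purely for illustration) retrieves all drinking-water taps in a small area:

```python
import requests

# Overpass QL: nodes tagged amenity=drinking_water inside a bounding box
# given as (south, west, north, east)
query = """
[out:json][timeout:25];
node["amenity"="drinking_water"](47.05,15.40,47.10,15.50);
out body;
"""
response = requests.post("https://overpass-api.de/api/interpreter",
                         data={"data": query}, timeout=30)
for element in response.json()["elements"]:
    print(element["lat"], element["lon"], element.get("tags", {}))
```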
The advancement of Open Science and the affordability of computing services allow for the discovery and processing of large amounts of information, boosting data integration from different scientific domains and blurring traditional discipline boundaries. However, data are often heterogeneous in format and provenance, and the capacity to combine them and extract new knowledge to address scientific and societal problems relies on standardisation, integration and interoperability.
Key enablers of the OS paradigm are the ESFRI Research Infrastructures, of which ECCSEL (www.eccsel.org), EMSO (https://emso.eu/) and EPOS (www.epos-eu.org) are examples, currently enhancing FAIRness and integration within the Geo-INQUIRE project. Thanks to decades of work in data standardisation, integration and interoperability, they enable scientists to combine data from different disciplines and data sources into innovative research to solve scientific and societal questions.
However, while data-driven science is ripe with opportunities for ground-breaking inter- and transdisciplinary results, many challenges and barriers remain.
This session aims to foster scientific cross-fertilization exploring real-life scientific studies and research experiences from scientists and ECS in Environmental Sciences. We also welcome contributions about challenges experienced in connection to data availability, collection, processing, interpretation, and the application of interdisciplinary methods.
A non-exhaustive list of topics includes:
- multidisciplinary studies involving data from different disciplines, e.g. combining seismology, geodesy, and petrology to understand subduction zone dynamics
- interdisciplinary works, integrating two or more disciplines to create fresh approaches, e.g. merging solid earth and ocean sciences data to study coastal areas and earth dynamics
- showcase activities enabling interdisciplinarity and open science, e.g. enhancing FAIRness of data and services, enriching data provision, enabling cross-domain AI applications, software and workflows, and transnational access and capacity building for ECS
- transdisciplinary experiences that surpass disciplinary boundaries, integrate paradigms and engage stakeholders from diverse backgrounds, e.g. bringing together geologists, social scientists, civil engineers and urban planners to define risk maps and prevention measures in urban planning, or studies combining volcanology, atmospheric, health and climate sciences
Satellite imagery has become critical in documenting and responding to the impacts of armed conflicts. These impacts cover environmental degradation, infrastructure destruction and deterioration of societal functions, such as access to clean water, food security, and public health.
However, there continue to be challenges around alignment and clarity regarding the methodological and ethical considerations of carrying out, and collaborating on, remote sensing of environmental and social impacts in humanitarian contexts. The field relies on best practices but has not yet articulated their goals or how they encompass authentication, privacy and scientific standards. These considerations are especially important when the data are used in accountability settings, including criminal prosecution and international tribunals, where satellite data can be used as legal evidence.
This session welcomes presentations showcasing the state of play of using remote sensing to monitor environmental and societal harm in conflict-affected regions. Subsequently, it will raise questions at the frontier of our research discipline: how can we understand the impacts of RS mapping and output sharing on victims, ecosystems, and partners in conflict-affected areas? What are opportunities to involve local stakeholders in data collection and verification? How can RS be most effective in supporting international norm-building and norm-compliance around the costs of war? What protocols should RS research adhere to to ensure relevance in addressing accountability and victim support through international legal frameworks?
By addressing these questions, this session aims to start a conversation among practitioners and academics on how remote sensing can be responsibly and effectively used to address the environmental and societal impacts of armed conflicts while supporting justice and accountability.
The visualization and user-friendly exploration of information from scientific data is one of the main tasks of good scientific practice. But steady increases in the temporal and spatial resolution of modeling and remote sensing approaches lead to ever-increasing data complexity and volumes. At the same time, earth system science data are becoming increasingly important as decision support for stakeholders and other end users far beyond the scientific domains.
This poses major challenges for the entire process chain, from data storage to web-based visualization. For example, (1) the data has to be enriched with metadata and made available via appropriate and efficient services; (2) visualization and exploration tools must then access these often decentralized services via interfaces that are as standardized as possible; (3) the presentation of the essential information must be coordinated in co-design with the potential end users. This challenge is reflected in the active development of tools, interfaces and libraries for modern earth system science data visualization and exploration.
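To make step (2) concrete, a client can consume such data through standardised OGC interfaces; in this minimal sketch (OWSLib; the endpoint and layer name are placeholders) a Web Map Service is queried for a rendered map:

```python
from owslib.wms import WebMapService

# Placeholder endpoint; any OGC-compliant WMS responds to the same calls
wms = WebMapService("https://example.org/geoserver/wms", version="1.1.1")
print(list(wms.contents))                        # layers the service advertises

img = wms.getmap(layers=["temperature_2m"],      # hypothetical layer name
                 srs="EPSG:4326",
                 bbox=(-10.0, 35.0, 30.0, 70.0), # lon/lat extent over Europe
                 size=(800, 600),
                 format="image/png",
                 transparent=True)
with open("map.png", "wb") as f:
    f.write(img.read())
```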
In this session, we hence aim to establish a transdisciplinary community of scientists, software-developers and other experts in the field of data visualization in order to give a state-of-the-art overview of tools, interfaces and best-practices. In particular, we look for contributions in the following fields:
- Developments of open source visualization and exploration techniques for earth system science data
- Co-designed visualization solutions enabling transdisciplinary research and decision support for non-scientific stakeholders and end-users
- Tools and best-practices for visualizing complex, high-dimensional and high frequency data
- Services and interfaces for the distribution and presentation of metadata enriched earth system science data
- Data visualization and exploration solutions for decentralized research data infrastructures
All contributions should emphasize the usage of community-driven interfaces and open source solutions and finally contribute to the FAIRification of products from earth system sciences.
The Earth System is a complex and dynamic network involving interactions between the atmosphere, oceans, land, and biosphere. Understanding and analyzing data from the Earth System Model is crucial for predicting and mitigating the impacts of climate change.
In this session, we aim to address the challenges of data accessibility and promote integrated use by opening gateways for Earth and environmental sciences. It is essential to enable communication, integration, and data processing for multidisciplinary applications across different fields. This session will highlight current or emerging initiatives, standards, software, or complete IT systems that support environmental data science for a better understanding of the Earth System. The services presented must adhere to the FAIR principles, promoting more robust and transparent research. Additionally, the session will showcase efforts to implement FAIR standards, which are essential for seamless data integration and innovation in Earth Sciences.
We welcome contributions from any origin, whether from a European (or other) project, research infrastructure, institute, association, or individual researcher, that present a vision, service, IT infrastructure, software stack, standard, etc.
In particular, we encourage submissions that demonstrate how we can:
- Enhance data by adding semantic annotations or adhering to standards, and ensure its availability through appropriate and efficient services (see the sketch after this list).
- Ensure that metadata, data formats, and access policies align with universally accepted standards to maintain consistency and interoperability.
- Provide tools for analyzing, visualizing and exploring data through interfaces and virtual environments that are both standardized and user-friendly.
- Collaborate with end users to design and present essential information effectively.
- Provide clear and effective training to help users understand and adopt FAIR practices, ensuring they can utilize the services and tools efficiently.
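As a minimal sketch of the first point above (the variable, values, and filename are illustrative), CF-style attributes turn a bare array into a machine-interpretable, standards-aligned dataset:

```python
import numpy as np
import xarray as xr

# CF-style attributes make the variable machine-interpretable across tools
tas = xr.DataArray(
    np.random.default_rng(0).normal(288, 5, (12, 73, 144)).astype("float32"),
    dims=("time", "lat", "lon"),
    attrs={
        "standard_name": "air_temperature",  # from the CF standard-name table
        "units": "K",
        "long_name": "Near-surface air temperature",
    },
    name="tas",
)
ds = xr.Dataset({"tas": tas}, attrs={"Conventions": "CF-1.10"})
ds.to_netcdf("tas_annotated.nc")
```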
In the last ten years, space exploration has evolved towards a significant number of landed missions to Mars and the Moon. The lunar exploration program plans to land human crews on the Moon as the first step in the exploration of Mars. On Earth, geoscientists and geologists now have access to remote sensing data, UAVs with imagers, and GNSS positioning and navigational support.
Geologic mapping is happening in the digital domain for planetary and terrestrial geology, but the communities have been generally separated since the Apollo era.
This session will gather Earth and planetary geoscientists to discuss methods and techniques adopted in both communities for modern basic and applied geologic mapping.
Topics:
- innovative techniques and methods for field and remote sensing data collection
- integration of uncommon datasets
- digital formats and interoperability/usability of geological maps
- geologic mapping programs from national and space agencies.
- cartographic representation
- derived or specialized maps (alteration, ore/resources, geotechnical)
The outcome would be a broad view of similarities, differences, and open issues in both earth and planetary mapping, creating the opportunity to fill potential gaps.
We will invite experts in these fields as Keynote Speakers.
Humans have been successfully mapping the remotest and most inhospitable places on Earth, and the surfaces and interiors of other planets and their moons, at the highest resolution. However, vast areas here on Earth remain blank spots, located in regions that have not been accessed through field surveys, geophysical methods or remote sensing due to technical and/or financial challenges. Some of these regions are crucial, as they hold the potential to uncover important geological and habitat information to facilitate future exploration efforts and an overall better understanding of our environment.
Such extreme and remote locations are commonly associated with the ocean floor or planetary surfaces, but these extreme worlds might also be found in hot deserts, under the ice, in high mountain ranges, in volcanic edifices, hidden underneath dense canopy cover, or located within the near-surface crust. All such locations are prime targets for remote sensing mapping in a wider sense. The methodological and technical repertoire for investigating extreme and remote locations is thus highly specialized, and despite the different contexts there are commonalities not only with respect to technical mapping approaches, but also in the way knowledge is gathered, assessed, interpreted and visualised regarding its scientific as well as its economic value.
This session invites contributions to this field of geologic mapping and cartography of extreme (natural) environments with a focus on the scientific synthesis and extraction of information and knowledge.
A candidate contribution might cover, but is not limited to, topics such as:
- ocean mapping using manned and unmanned vehicles and devices,
- offshore exploration using remote sensing techniques,
- crustal investigation through drilling and sampling,
- mapping campaigns in glaciated regions
- subsurface investigation using radar techniques,
- planetary geologic and geophysical mapping,
- geologic investigation of desert environments.
The aim of this session is to bring together researchers mapping environments that are hardly accessible or not accessible at all, and who thus often rely on geophysical or remote sensing techniques as the main source of data and information. We would like to focus on geological and geophysical mapping of areas for which we have no or only very limited knowledge due to harsh environmental conditions; we thus exclude areas that are inaccessible for political reasons.
Geological mapping and modelling is an essential foundation for the geological sciences and underpins numerous disciplines, e.g. geology, geophysics, geochemistry, mineralogy, engineering geology, and hydrogeology. Today's products are mostly digital maps and GIS data layers; 2D, 3D, and even 4D models; and innovative spatial data analyses with their portrayal and visualisation. These products and outcomes are applicable to, and benefit, various uses such as finding suitable sites for geothermal energy, geological risk assessment, protecting groundwater resources, coastal protection, and mineral exploration.
This session invites contributions by scientists mainly working on:
• Geological field mapping
• 2D, 3D and 4D geological models
• Cross-boundary mapping and harmonization
• Building geological information systems
• Data analysis
• Portrayal and presentation
• Education and training in geological mapping and modelling.
The UN defines climate change as the long-term shift in average temperatures and weather patterns caused by natural and anthropogenic processes. Shifts in climate patterns attract considerable attention because of their threat to human wellbeing and the health of the planet. The increasing number of climate extremes and natural disasters, such as wildfires, floods and droughts, has an immediate and direct impact on human health; therefore, many adaptation actions are in place to help people adjust to the actual or expected climate and its effects.
Climate adaptation measures cannot be determined at the global or even regional level, because these processes encompass a wide range of measures to reduce vulnerability to climate change impacts. Unfortunately, climate vulnerability is exacerbated in low-income communities and developing countries by social inequalities (e.g., poverty, lack of access to education and healthcare, and political marginalisation), making it harder to adapt to and recover from the effects of climate change. A clear case is that of the indigenous communities in the Amazon region affected by the increasing number of droughts, and consequently frequent wildfires, over the past decades. These climate-related natural disasters are likely to drive their displacement and migration. Another scenario is the impact of climate change on crop production, raising food prices and undoubtedly having a major effect on people already experiencing poverty and extreme poverty in developing countries, such as Brazil.
Monitoring climate from space is a powerful role for Earth observation satellites, since they collect global and repetitive information on important climate components. Earth system models are also great resources for understanding short- and long-term climate dynamics, helping us to quantify the effects of climate change on social inequalities and to evaluate the efficiency of proposed local actions for climate adaptation and mitigation.
Climate change is not just an environmental issue but a social justice challenge; therefore, in this session we aim to discuss what is known so far, and what needs further research, on the effects of climate change on the most vulnerable communities, as well as to present further cases evidencing the limitations and challenges, mainly in Latin America. The session invites presentations that demonstrate the value of Earth system science in tackling climate-related health inequalities in developing countries.
Understanding Earth's natural processes, particularly in the context of global climate change, has gained widespread recognition as an urgent and central research priority that requires further exploration. Recent advancements in satellite technology, characterized by new platforms with high revisit times and the growing capabilities for collecting repetitive ultra-high-resolution aerial images through unmanned aerial vehicles (UAVs), have ushered in exciting opportunities for the scientific community. These developments pave the way for developing and applying innovative image-processing algorithms to address longstanding and emerging environmental challenges.
The primary objective of the proposed session is to convene scientific researchers dedicated to the field of satellite and aerial time-series imagery. The aim is to showcase ongoing research efforts and novel applications in this dynamic area. This session is specifically focused on presenting studies centred around the creation and utilization of pioneering algorithms for processing satellite time-series data, as well as their applications in various domains of remote sensing, aimed at investigating long-term processes across all Earth's realms, including the sea, ice, land, and atmosphere.
In today's era of unprecedented environmental challenges and the ever-increasing availability of data from satellite and aerial sources, this session serves as a platform to foster collaboration and knowledge exchange among experts working on the cutting edge of Earth observation technology. By harnessing the power of satellite and aerial time-series imagery, we can unlock valuable insights into our planet's complex systems, ultimately aiding our collective efforts to address pressing global issues such as climate change, natural resource management, disaster mitigation, and ecosystem preservation.
The session organizers welcome contributions from researchers engaged in applied and theoretical research. These contributions should emphasize fresh methods and innovative satellite and aerial time-series imagery applications across all geoscience disciplines. This inclusivity encompasses aerial and satellite platforms and the data they acquire across the electromagnetic spectrum.
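To make this concrete, here is a minimal sketch, not any particular group's method, of one common time-series operation: fitting a per-pixel linear trend over a synthetic stack of monthly NDVI scenes with a single least-squares solve. The tile size, noise level and trend are invented for the demo.

```python
import numpy as np

# Synthetic stack: 120 monthly NDVI scenes of a 100 x 100 pixel tile,
# with a weak greening trend (+0.01 NDVI/yr) buried in noise.
rng = np.random.default_rng(1)
t = np.arange(120) / 12.0                              # time in years
stack = (0.5 + 0.01 * t[:, None, None]
         + 0.05 * rng.standard_normal((120, 100, 100)))

# Per-pixel linear trend from a single least-squares solve on the
# flattened stack (design matrix [t, 1]).
A = np.vstack([t, np.ones_like(t)]).T
coef, *_ = np.linalg.lstsq(A, stack.reshape(120, -1), rcond=None)
trend = coef[0].reshape(100, 100)                      # NDVI change per year
print("median trend [NDVI/yr]:", np.median(trend))     # ~0.01 by construction
```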
The coastal zone is globally of great environmental and economic importance, but the stability and sustainability of this region faces many threats. Climate-induced sea level rise, coastal erosion and flooding due to increased storms, and pollution and disturbance of ecosystems are all stresses shaping the present coastline and near-shore environments. These direct impacts on the coast are driving coastline management and marine policies worldwide.
These initiatives rely on key, up-to-date, and repeatable environmental information layers, which are required to effectively monitor coastal change and make informed and coordinated decisions on the sustainable use of coastal and marine resources, in alignment with climate strategies and the protection of coastal areas.
To address this need, advanced methodologies based on remote sensing are becoming more widely used. These techniques have benefited from the surge of Earth Observation data and advancements in computational and classification algorithms. In recent years, together with the upsurge of cloud computing, there has been a growing focus on new challenges such as sensor and data fusion from multiple sources and their potential application to the effective monitoring of changes in coastal environments.
This session calls for papers that advance our capability or understanding of the application of Earth Observation remote sensing to coastal zone monitoring, with specific interest in contributions that (1) develop novel methodologies or data fusion workflows in coastal geomorphology, near-shore satellite-derived bathymetry, coastal altimetry, coastal dynamics, water quality and coastal ecosystems, (2) include validation and uncertainty budgets, (3) incorporate temporal resolution for monitoring and prediction of coastal change, and (4) impact a wide range of applications.
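As one concrete example from the satellite-derived bathymetry topic above, the sketch below applies the band-ratio method of Stumpf et al. (2003) to synthetic reflectances and calibrates it against hypothetical sonar depths; the decay constants and sample sizes are assumptions made purely for illustration.

```python
import numpy as np

# Hypothetical blue/green reflectances over a shallow shelf, plus a few
# "in-situ" sonar depths; the exponential decay constants are invented.
rng = np.random.default_rng(2)
depth_true = rng.uniform(1.0, 15.0, 50)                   # metres
r_green = 0.08 * np.exp(-0.12 * depth_true) + 0.01
r_blue = 0.10 * np.exp(-0.08 * depth_true) + 0.01

# Band-ratio pseudo-depth (Stumpf et al., 2003): ln(n*Rb) / ln(n*Rg)
psdb = np.log(1000.0 * r_blue) / np.log(1000.0 * r_green)

# Linear calibration of the pseudo-depth against the in-situ depths
m, c = np.polyfit(psdb, depth_true, 1)
depth_est = m * psdb + c
print("RMSE [m]:", np.sqrt(np.mean((depth_est - depth_true) ** 2)))
```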
Sustainable agriculture and forestry face the challenges of lacking scalable solutions and sufficient data for monitoring vegetation structural and physiological traits, vegetation (a)biotic stress, and the impacts of environmental conditions and management practices on ecosystem productivity. Remote sensing from spaceborne, unmanned/manned airborne, and proximal sensors provides unprecedented data sources for agriculture and forestry monitoring across scales. The synergy of hyperspectral, multispectral, thermal, LiDAR, or microwave data can thoroughly identify vegetation stress symptoms in near real time and can be combined with modeling approaches to forecast ecosystem productivity. This session welcomes a wide range of contributions on remote sensing for sustainable agriculture and forestry including, but not limited to: (1) the development of novel sensing instruments and technologies; (2) the quantification of ecosystem energy, carbon, water, and nutrient fluxes across spatial and temporal scales; (3) the synergy of multi-source and multi-modal data; (4) the development and applications of machine learning, radiative transfer modeling, or their hybrid; (5) the integration of remotely sensed plant traits to assess ecosystem functioning and services; (6) the application of remote sensing techniques for vegetation biotic and abiotic stress detection; and (7) remote sensing to advance nature-based solutions in agriculture and forestry for climate change mitigation. This session is inspired by the COST Action Pan-European Network of Green Deal Agriculture and Forestry Earth Observation Science (PANGEOS, https://pangeos.eu/), which aims to leverage state-of-the-art remote sensing technologies to advance field phenotyping workflows, precision agriculture/forestry practices and larger-scale operational assessments for a more sustainable management of Europe’s natural resources.
This session covers climate predictions from seasonal to multi-decadal timescales and their applications. Continuing to improve such predictions is of major importance to society. The session embraces advances in our understanding of the origins of seasonal to decadal predictability and of the limitations of such predictions. This includes advances in improving forecast skill and reliability and making the most of this information by developing and evaluating new applications and climate services.
The session welcomes contributions from dynamical models, machine-learning or other statistical methods and hybrid approaches. It will investigate predictions of various climate phenomena, including extremes, from global to regional scales, and from seasonal to multi-decadal timescales (including seamless predictions). Physical processes and sources relevant to long-term predictability (e.g. ocean, cryosphere, or land) as well as predicting large-scale atmospheric circulation anomalies associated with teleconnections will be discussed. Analysis of predictions in a multi-model framework, and ensemble forecast initialization and generation will be another focus of the session. We are also interested in approaches addressing initialization shocks and drifts. The session welcomes work on innovative methods of quality assessment and verification of climate predictions. We also invite contributions on the use of seasonal-to-decadal predictions for risk assessment, adaptation and further applications.
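As a toy illustration of the verification theme, the following sketch computes the anomaly correlation coefficient, one standard deterministic skill score, for a synthetic 20-member ensemble; the ensemble size and noise model are arbitrary assumptions, and the comparison simply shows why ensemble means tend to verify better than single members.

```python
import numpy as np

def anomaly_correlation(fcst, obs, clim):
    """Anomaly correlation coefficient, a standard deterministic score."""
    fa, oa = fcst - clim, obs - clim
    return np.sum(fa * oa) / np.sqrt(np.sum(fa ** 2) * np.sum(oa ** 2))

# Synthetic "truth" and a 20-member ensemble whose members equal the
# truth plus independent noise; sizes and noise level are arbitrary.
rng = np.random.default_rng(3)
obs = rng.standard_normal(500)
ens = obs + rng.standard_normal((20, 500))
clim = np.zeros_like(obs)                    # climatology in anomaly space
print("ACC, ensemble mean :", anomaly_correlation(ens.mean(axis=0), obs, clim))
print("ACC, single member :", anomaly_correlation(ens[0], obs, clim))
```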
Instrumentation and measurement technologies are currently playing a key role in the monitoring, assessment and protection of water resources.
This session focuses on measurement techniques, sensing methods and data science implications for the observation of water systems, emphasizing the strong link between measurement aspects and computational aspects characterising the water sector.
This session aims at providing an updated framework of the observational techniques, data processing approaches and sensing technologies for water management and protection, giving attention to today’s data science aspects, e.g. data analytics, big data, cloud computing and Artificial Intelligence.
Building a community around instrumentation & measurements for water systems is one of the aims of the session. In particular, participants in the EGU2020 edition of this session contributed to this book: A. Di Mauro, A. Scozzari & F. Soldovieri (eds.), Instrumentation and Measurement Technologies for Water Cycle Management, Springer Water, ISBN: 978-3-031-08261-0, 2022.
We welcome contributions about field measurement approaches, development of new sensing techniques, low cost sensor systems and measurement methods enabling crowdsourced data collection.
Therefore, water quantity and quality measurements as well as water characterization techniques are within the scope of this session. Remote sensing techniques for the monitoring of water resources and/or the related infrastructures are also welcome. Contributions dealing with the integration of data from multiple sources are solicited, as well as the design of ICT architectures (including IoT concepts) and of computing systems for the user-friendly monitoring of the water resource and the related networks.
Studies about signal and data processing techniques (including machine learning) and the integration between sensor networks and large data systems are also very encouraged.
Continuous monitoring of natural physical processes is crucial for understanding their behaviour. The variety of instruments available enhances data collection, aiding in the comprehension of these processes. Long-term data collection reveals trends and patterns, such as seasonal variations, multi-year cycles, and anthropogenic impacts (e.g., deforestation, urbanization, pollution). Conversely, short-term monitoring is vital for real-time decision-making, improving hazard assessment, risk management, and warning systems. Effective data analysis and innovative instrumentation contribute to developing mitigation and adaptation strategies. This session highlights the application of geosciences and geophysical instrumentation, including sensors in natural and laboratory environments, for monitoring natural phenomena and utilizing data systems to study these processes.
The session disseminates advanced research on natural physical processes and the use of scientific principles to address future challenges, including extreme climatic conditions. It encourages novel, interdisciplinary approaches to monitoring, aiming to establish historical baselines. This session seeks to bridge scientific knowledge and technological advancements to improve monitoring and understanding of natural physical processes. The session is inter- and transdisciplinary (ITS), covering topics such as:
1. Destructive and Non-Destructive Sensing Techniques, including contactless and remote sensing methodologies.
2. Monitoring System Developments for understanding hydro-meteorological processes, glaciers, soil erosion, settlements, liquefaction, landslides, earthquakes, volcanic events, and wildfires.
3. Real-Time Monitoring Systems, integrating geoscience data with Building Information Modelling (BIM), digital twins, robotic monitoring, and automation for improved decision-making.
4. Advances in Data Systems for efficient real-time monitoring and processing of large data volumes using Cloud Data Platforms, Distributed and Scalable Data Systems, Real-Time Data Processing, AI, Machine Learning, Data Privacy and Security, and Edge Computing.
5. Storage Technologies and Data Integration, including advancements in Graph Databases, Data Interoperability, and Multi-Model Databases.
6. Intelligent data analysis approaches for the accurate and precise interpretation of big data sets driven by various technologies.
This session welcomes cross-cutting advances in theoretical, methodological and applied studies at the synergistic interface among physical, analytical, information-theoretic, kinematic-geometric, machine learning, artificial and systems intelligence approaches to complex system dynamics, hazards and predictability across Hydrology and broader Earth System Sciences.
Special focus is given to unveil complex system dynamics, regimes, transitions, extremes, hazards and their interactions, along with their physical understanding, predictability and uncertainty, across multiple spatiotemporal scales.
The session encourages discussion on interdisciplinary physical and data-based approaches to system dynamics across Hydrology and broader Geosciences, ranging from novel advances in stochastic, computational, information-theoretic and dynamical system analysis, to cross-cutting emerging pathways in information physics, artificial and systems intelligence with process understanding in mind.
The session further encompasses practical aspects of working with systems intelligence and emerging technological approaches for strengthening systems analytics, causal discovery, model design and evaluation, predictability and uncertainty analysis, along with automated geophysical learning, prediction and decision support.
Take part in a thrilling session exploring and discussing promising avenues in system dynamics and information discovery, quantification, modelling and interpretation, where methodological ingenuity and natural process understanding come together to shed light on fundamental theoretical aspects and to build innovative methodologies for tackling real-world challenges facing our planet.
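To give a flavour of the information-theoretic tools mentioned above, here is a minimal histogram-based mutual information estimator; the bin count and the synthetic coupling are illustrative assumptions, and operational causal discovery would rely on more careful estimators and significance testing.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram-based mutual information estimate (in nats): a simple,
    nonlinear dependence measure used in information-theoretic analyses."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                              # joint probabilities
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)     # marginals
    nz = pxy > 0                                  # avoid log(0)
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))

rng = np.random.default_rng(4)
x = rng.standard_normal(5000)
y_dep = 0.8 * x + 0.6 * rng.standard_normal(5000)   # coupled signal
y_ind = rng.standard_normal(5000)                   # independent signal
print("MI(x, coupled)    :", mutual_information(x, y_dep))
print("MI(x, independent):", mutual_information(x, y_ind))   # near zero
```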
Remote sensing products have a high potential to contribute to monitoring and modelling of water resources. Nevertheless, their use by water managers is still limited due to lack of quality, resolution, trust, accessibility, or experience.
In this session, we look for new developments that support the use of remote sensing data for water management applications from local to global scales. We are looking for research to increase the quality of remote sensing products, such as higher spatial and/or temporal resolution mapping of land use and/or agricultural practices or improved assessments of river discharge, lake and reservoir volumes, groundwater resources, drought monitoring/modelling and its impact on water-stressed vegetation, as well as on irrigation volumes monitoring and modelling. We are interested in quality assessment of remote sensing products through uncertainty analysis or evaluations using alternative sources of data. We also welcome contributions using a combination of different techniques (physically based models or artificial intelligence techniques) or a combination of different sources of data (remote sensing and in situ) and different imagery types (satellite, airborne, drone). Finally, we wish to attract presentations on developments of user-friendly platforms (as open as possible), providing smooth access to remote sensing data for water applications.
We are particularly interested in applications of remote sensing to determine human-water interactions and the climate change impacts on the whole water cycle (including the inland and coastal links).
The socio-economic impacts associated with floods are increasing. Floods are the most frequent of the weather-related disasters and the most impactful in terms of the number of people affected: nearly 0.8 billion people were affected by inundations in the last decade, while the overall economic damage is estimated at more than $300 billion.
In this context, remote sensing represents a valuable source of data and observations that may alleviate the decline in field surveys and gauging stations, especially in remote areas and developing countries. The implementation of remotely-sensed variables (such as digital elevation model, river width, flood extent, water level, flow velocities, land cover, etc.) in hydraulic modelling promises to considerably improve our process understanding and prediction. During the last decades, an increasing amount of research has been undertaken to better exploit the potential of current and future satellite observations, from both government-funded and commercial missions, as well as many datasets from airborne sensors carried on airplanes and drones. In particular, in recent years, the scientific community has shown how remotely sensed variables have the potential to play a key role in the calibration and validation of hydraulic models, as well as provide a breakthrough in real-time flood monitoring applications. With the proliferation of open data and models in Earth observation with higher data volumes than ever before, combined with the exponential growth in deep learning, this progress is expected to rapidly increase.
We invite presentations related to flood monitoring and mapping through remotely sensed data including but not limited to:
- Remote sensing data for flood hazard and risk mapping, including commercial satellite missions as well as airborne sensors (aircraft and drones);
- Remote sensing techniques to monitor flood dynamics;
- The use of remotely sensed data for the calibration, or validation, of hydrological or hydraulic models;
- Data assimilation of remotely sensed data into hydrological and hydraulic models;
- Improvement of river discretization and monitoring based on Earth observations;
- River flow estimation from remote sensing;
- Deep-learning-based flood monitoring or prediction.
Early career and underrepresented scientists are particularly encouraged to participate.
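As a minimal, hypothetical sketch of the flood mapping theme (not a production workflow), the snippet below thresholds a synthetic Sentinel-1-like backscatter tile to derive a flood mask and an inundated area; the -16 dB threshold is a placeholder that would in practice be derived per scene, for example by histogram splitting.

```python
import numpy as np

# Synthetic Sentinel-1-like backscatter tile in dB: open water is dark
# (~ -22 dB), land brighter (~ -8 dB); 20% of pixels are "flooded".
rng = np.random.default_rng(5)
water = rng.random((500, 500)) < 0.20
sigma0 = np.where(water,
                  rng.normal(-22.0, 1.5, (500, 500)),
                  rng.normal(-8.0, 2.0, (500, 500)))

# Placeholder threshold: in practice it is derived per scene, e.g. by
# histogram splitting (Otsu) or tile-based thresholding.
flood_mask = sigma0 < -16.0

pixel_area_m2 = 10 * 10          # ~10 m pixels for Sentinel-1 IW GRD
print("flooded area [km2]:", flood_mask.sum() * pixel_area_m2 / 1e6)
```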
Landslides and slope instabilities are natural hazards that cause significant damage and loss of life around the world each year. Yet, their triggering mechanisms are still an open area of research. Landslide-prone areas are characterized by highly heterogeneous properties and subsurface dynamics, where, for example, changes in the fluid pathways, geomechanical parameters, and subsurface structures can occur over time-scales ranging from second/minutes to months/years, requiring radically different approaches for their identification and prediction. Hence, there is a need to develop a suite of novel methods for studying landslide and slope instabilities architecture as well as their temporal and spatial changes in internal structure and properties. The complexity of the problem requires the application of innovative research methods in data acquisition, methodology and the integrated interpretation of geophysical, geotechnical and geological data.
This session invites abstracts presenting novel and emerging trends and opportunities for landslide and slope instability reconnaissance, monitoring, and early warning, particularly multi-method approaches. Presentations showing the integration of various geophysical and remote sensing techniques, especially those using machine learning or time-lapse surveys, are very welcome. Likewise, presentations focusing on determining the geomechanical parameters of mass movements using geological (boreholes, geomechanical or other surveys) and geophysical studies are also in the scope of the session. Since slope instabilities are a cross-disciplinary problem, contributions on avalanches, natural or engineered slopes, or climate-induced slope instabilities are warmly invited.
Effective and enhanced hydrological monitoring is essential for understanding water-related processes in our rapidly changing world. Image-based river monitoring has proven to be a powerful tool, significantly improving data collection, analysis, and accuracy, while supporting timely decision-making. The integration of remote and proximal sensing technologies with citizen science and artificial intelligence has the potential to revolutionize monitoring practices. To advance this field, it is vital to assess the quality of current research and ongoing initiatives, identifying future trajectories for continued innovation.
We invite submissions focused on hydrological monitoring utilizing advanced technologies, such as remote sensing, AI, machine learning, Unmanned Aerial Systems (UAS), and various camera systems, in combination with citizen science. Topics of interest include, but are not limited to:
• Disruptive and innovative sensors and technologies in hydrology.
• Advancing opportunistic sensing strategies in hydrology.
• Automated and semi-automated methods.
• Extraction and processing of water quality and river health parameters (e.g., turbidity, plastic transport, water depth, flow velocity).
• New approaches to long-term river monitoring (e.g., citizen science, camera systems—RGB/multispectral/hyperspectral, sensors, image processing, machine learning, data fusion).
• Innovative citizen science and crowd-based methods for monitoring hydrological extremes.
• Novel strategies to enhance the detail and accuracy of observations in remote areas or specific contexts.
The goal of this session is to bring together scientists working to advance hydrological monitoring, fostering a discussion on how to scale these innovations to larger applications.
This session is co-sponsored by MOXXI, the working group on novel observational methods of the IAHS.
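To illustrate one of the image-based techniques in scope, the sketch below estimates the frame-to-frame displacement of surface tracers via FFT cross-correlation, the core operation of PIV/LSPIV-style velocimetry; the frame content, pixel size and frame interval are invented for the demo.

```python
import numpy as np

def patch_displacement(a, b):
    """Integer-pixel shift of frame b relative to frame a via FFT
    cross-correlation, the core operation of PIV/LSPIV velocimetry."""
    corr = np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape))
    shape = np.array(a.shape)
    peak[peak > shape // 2] -= shape[peak > shape // 2]  # signed shifts
    return peak

# Two synthetic frames: the second is the first displaced by (3, 5) px,
# mimicking tracer motion on the water surface between exposures.
rng = np.random.default_rng(6)
frame1 = rng.random((64, 64))
frame2 = np.roll(frame1, shift=(3, 5), axis=(0, 1))

dy, dx = patch_displacement(frame1, frame2)
pixel_size, dt = 0.02, 0.5       # m/pixel and seconds: hypothetical values
speed = np.hypot(dy, dx) * pixel_size / dt
print(f"shift = ({dy}, {dx}) px, surface speed ~ {speed:.2f} m/s")
```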
Critical zones (CZ) are natural and anthropogenic environments where air, water, soil, and rock interact in complex ways with ecosystems and society. Groundwater is the largest reservoir in this integrated system, but it is often overlooked due to the challenges of accessing it and its slower movement compared to other CZ components. However, dedicated CZ observatories (e.g. eLTER and CZEN) and intensively instrumented study areas provide extensive and detailed data on groundwater flow under contrasting climates, geology, vegetation and land use, and offer the opportunity for comprehensive multi-site studies. This session aims to showcase contributions that highlight such studies, which enhance our understanding of water fluxes within the critical zone and their crucial role in energy and material cycles.
We invite presentations that address key research questions, such as: (i) How do components across different scales interact and interconnect, from the vertical column (atmosphere, vegetation, soil, and bedrock) to large-scale hydrosystems spanning headwaters and 2D hillslopes, and from surface waters and the vadose zone to the deeper limits of groundwater? (ii) How can we bridge the gap between rapid subsurface and slow groundwater flow processes and the longer-term environmental changes that collectively shape the critical zone? (iii) What are the potential consequences of climate warming, extreme weather events, and wildfires for groundwater recharge, discharge processes, and water quality?
This session aims to bring together researchers and scientists from diverse backgrounds to advance our understanding of groundwater’s role in the critical zone. We seek to illustrate how combining observations and numerical experiments can help delineate future predictions for groundwater systems under various climate and land-use evolution scenarios.
Sitting under a tree, you feel the spark of an idea, and suddenly everything falls into place. The following days and tests confirm: you have made a magnificent discovery — so the classical story of scientific genius goes…
But science as a human activity is error-prone, and might be more adequately described as "trial and error", or as a process of successful "tinkering" (Knorr, 1979). Thus we want to turn the story around, and ask you to share 1) those ideas that seemed magnificent but turned out not to be, and 2) the errors, bugs, and mistakes in your work that made the scientific road bumpy. What ideas were torn down or did not work, and what concepts survived in the ashes or were robust despite errors? We explicitly solicit Blunders, Unexpected Glitches, and Surprises (BUGS) from modeling and field or lab experiments and from all disciplines of the Geosciences.
Handling mistakes and setbacks is a key skill of scientists. Yet, we publish only those parts of our research that did work. That is also because a study may have better chances to be accepted for publication in the scientific literature if it confirms an accepted theory or if it reaches a positive result (publication bias). Conversely, the cases that fail in their test of a new method or idea often end up in a drawer (which is why publication bias is also sometimes called the "file drawer effect"). This is potentially a waste of time and resources within our community as other scientists may set about testing the same idea or model setup without being aware of previous failed attempts.
In the spirit of open science, we want to bring the BUGS out of the drawers and into the spotlight. In a friendly atmosphere, we will learn from each other's mistakes, understand the impact of errors and abandoned paths on our work, and generate new insights for our science and scientific practice.
Here are some ideas for contributions that we would love to see:
- Ideas that sounded good at first, but turned out not to work.
- Results that seemed great at first but turned out to be caused by a bug or measurement error.
- Errors and slip-ups that resulted in insights.
- Failed experiments and negative results.
- Obstacles and dead ends you found and would like to warn others about.
--
Knorr, Karin D. “Tinkering toward Success: Prelude to a Theory of Scientific Practice.” Theory and Society 8, no. 3 (1979): 347–76.
Solicited authors:
Jan Seibert
Co-organized by BG0/EMRP1/ESSI4/GD10/GI1/GI6/GM11/GMVP1/PS0/SM2/SSS11/ST4
Visualisation of scientific data is an integral part of scientific understanding and communication. Scientists have to make decisions about the most effective way to communicate their results every day. How do we best visualise the data to understand it ourselves? How do we best visualise our results to communicate with others? Common pitfalls include overcrowding, overcomplicated or suboptimal plot types, and inaccessible colour schemes. Scientists may also be overwhelmed by the graphics requirements of different publishers, for presentations, posters, etc. This short course is designed to help scientists improve their data visualisation skills so that their research outputs are more accessible within their own scientific community and reach a wider audience.
Topics discussed include:
- golden rules of DataViz;
- choosing the most appropriate plot type and designing a good DataViz;
- graphical elements, fonts and layout;
- colour schemes, accessibility and inclusiveness;
- creativity vs simplicity – finding the right balance;
- figures for scientific journals (graphical requirements, rights and permissions);
- tools for effective data visualisation.
This course is co-organized by the Young Hydrologic Society (YHS), enabling networking and skill enhancement of early career researchers worldwide. Our goal is to help you make your figures more accessible to a wider audience, informative and beautiful. If you feel your graphs could be improved, we welcome you to join this short course.
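In the spirit of the topics above, here is a small, self-contained matplotlib sketch illustrating a few of the golden rules (labelled axes with units, a decluttered layout, vector output, and reliance on a colourblind-aware colour cycle); the data and file names are placeholders.

```python
import numpy as np
import matplotlib.pyplot as plt

# Three hypothetical series; matplotlib's default colour cycle ("tab10")
# is reasonably colourblind-aware, so we avoid hand-picked red/green pairs.
x = np.linspace(0, 10, 200)
fig, ax = plt.subplots(figsize=(5, 3))
for i, label in enumerate(["site A", "site B", "site C"]):
    ax.plot(x, np.sin(x + i), linewidth=2, label=label)

ax.set_xlabel("Time (days)")                 # always label axes and units
ax.set_ylabel("Water level (m)")
ax.legend(frameon=False)
for side in ("top", "right"):                # declutter: drop unused spines
    ax.spines[side].set_visible(False)
fig.tight_layout()
fig.savefig("water_levels.pdf")              # vector output for journals
```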
Environmental DNA (eDNA) metabarcoding is a noninvasive method to detect biodiversity in a variety of environments that has many exciting applications for geosciences. In this short course, we introduce eDNA metabarcoding to a geoscience audience and present potential research applications.
During the past 75 years, radiocarbon dating has been applied across a wide range of disciplines, including archaeology, geology, hydrology, geophysics, atmospheric science, oceanography, and paleoclimatology, to name but a few. Radiocarbon analysis is extensively used in environmental research as a chronometer (geochronology) or as a tracer for carbon sources and natural pathways. In the last two decades, advances in accelerator mass spectrometry (AMS) have enabled the analysis of very small quantities, as small as tens of micrograms of carbon. This has opened new possibilities, such as dating specific compounds (biomarkers) in sediments and soils. Other innovative applications include distinguishing between old (fossil) and natural (biogenic) carbon or detecting illegal trafficking of wildlife products such as ivory, tortoiseshells, and fur skins. Despite the wide range of applications, archives, and systems studied with the help of radiocarbon dating, the method has a standard workflow, starting from sampling through preparation and analysis, and arriving at final data that may require reservoir corrections and calibration.
This short course will provide an overview of radiocarbon dating, highlighting the state-of-the-art methods and their potential in environmental research, particularly in paleoclimatology. After a brief introduction to the method, participants will delve into practical examples of its application in the study of past climates, focusing on the 14C method and how we arrive at the radiocarbon age.
Topics include:
- Applications in paleoclimate research and other environmental fields
- Sampling and preparation
- Calibration programs
We strongly encourage discussions around radiocarbon research and will actively address problems related to sampling and calibration. This collaborative approach will enhance the understanding and application of radiocarbon dating in the respective fields.
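As a worked example of how a conventional radiocarbon age is obtained before calibration, the sketch below applies the standard relation t = -8033 ln(F14C), with F14C the measured fraction modern and 8033 years the Libby mean life; calendar ages then require calibration, for example against the IntCal curves.

```python
import numpy as np

def conventional_14c_age(f14c):
    """Conventional radiocarbon age (yr BP) from the fraction modern F14C,
    t = -8033 * ln(F14C), using the Libby mean life of 8033 years
    (Stuiver & Polach convention). This is still a radiocarbon age:
    calendar ages require calibration, e.g. against the IntCal curves."""
    return -8033.0 * np.log(f14c)

print(conventional_14c_age(0.50))   # ~5568 yr BP (one Libby half-life)
print(conventional_14c_age(0.25))   # ~11136 yr BP (two half-lives)
```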
Earth and Space Sciences Informatics (ESSI) is concerned with evolving issues of data management and analysis, technologies and methodologies, large-scale computational experimentation and modelling, and hardware and software infrastructure needs. Together, these elements provide us with the capability to change data into knowledge that can be applied to advance our understanding of the Earth and Space Sciences. This session is for presentations on all aspects of Earth and Space Science Informatics, especially those topics not represented by other ESSI sessions.