The ESSI Medal and Awards session celebrates this year's awardees for the Ian McHarg Medal and the ESSI Division Outstanding ECS Award. The Ian McHarg Medal Lecture will be given by François Robida, and the ESSI Division Outstanding ECS Award Lecture will be given by Anirudh Prabhu.
Speakers
Anirudh Prabhu, Carnegie Institution for Science, United States of America
Python is one of the fastest-growing programming languages and has moved to the forefront of the earth system sciences (ESS), thanks to its usability, its applicability to a range of different data sources and, last but not least, the development of a considerable number of ESS-friendly and ESS-specific packages.
This interactive Python course is aimed at ESS researchers who are interested in adding a new programming language to their repertoire. Apart from some understanding of fundamental programming concepts (e.g. loops, conditions, functions, etc.), this course presumes no previous knowledge of or experience with Python programming.
The goal of this course is to give participants an introduction to Python fundamentals and an overview of a selection of the most widely used packages in ESS. The applicability of these packages ranges from (simple to advanced) number crunching (e.g. NumPy), to data analysis (e.g. Xarray, Pandas), to data visualization (e.g. Matplotlib).
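As a taste of the workflow the course targets, here is a minimal, self-contained sketch combining these packages on synthetic data (no course dataset is assumed):

```python
# Minimal sketch of the package stack covered in the course
# (synthetic data; no external files assumed).
import numpy as np
import pandas as pd
import xarray as xr
import matplotlib.pyplot as plt

# NumPy: number crunching on a synthetic daily temperature series
time = pd.date_range("2020-01-01", periods=365, freq="D")
temp = 10 + 8 * np.sin(2 * np.pi * np.arange(365) / 365) + np.random.randn(365)

# xarray: label the data with coordinates, then aggregate by month
da = xr.DataArray(temp, coords={"time": time}, dims="time", name="t2m")
monthly = da.groupby("time.month").mean()

# Matplotlib: visualize the monthly climatology
monthly.plot(marker="o")
plt.ylabel("temperature (°C)")
plt.title("Synthetic monthly mean temperature")
plt.show()
```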
The course will be grouped into different sections based on the topics discussed, packages introduced and fields of application. Furthermore, each section will include an introduction to the main concepts (e.g. the fundamentals of a specific package) and an interactive problem-set part.
This course welcomes active participation in terms of both on-site/virtual discussion and coding. To achieve this goal, (i) the course curriculum and material will be provided in the form of Jupyter Notebooks, (ii) participants will have the opportunity to code up solutions to multiple problem sets, and (iii) pre-written working solutions will be readily available. In these interactive sections of the course, participants are invited to try out their newly acquired skills and code up potentially different working solutions.
We very much encourage everyone who is interested in career development, data analysis and learning a new programming language to join our course.
Foundation Models (FM) represent the next frontier in Artificial Intelligence (AI). These generalized AI models are designed not just for specific tasks but for a plethora of downstream applications. Trained on any sequence data through self-supervised methods, FMs eliminate the need for extensive labeled datasets. Leveraging the power of transformer architectures, which utilize self-attention mechanisms, FMs can capture intricate relationships in data across space and time. Their emergent properties, derived from the data, make them invaluable tools for scientific research. When fine-tuned, FMs outperform traditional models, both in efficiency and accuracy, paving the way for rapid development of diverse applications. FMs, with their ability to synthesize vast amounts of data and discern intricate patterns, can revolutionize our understanding of and response to challenging global problems, such as monitoring and mitigating the impacts of climate change and other natural hazards.
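For readers unfamiliar with the self-attention mechanism mentioned above, here is a minimal NumPy sketch of single-head scaled dot-product attention; the dimensions are toy values and the weights are random rather than trained, so this illustrates the mechanism only, not any specific foundation model.

```python
import numpy as np

def scaled_dot_product_attention(X, Wq, Wk, Wv):
    """Single-head self-attention over a token sequence X of shape (n_tokens, d_model)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv                 # project tokens to queries/keys/values
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # pairwise similarities, scaled
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                               # attention-weighted mix of values

rng = np.random.default_rng(0)
n, d = 6, 8                                          # 6 tokens, model width 8 (toy sizes)
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = scaled_dot_product_attention(X, Wq, Wk, Wv)
print(out.shape)  # (6, 8): each token now mixes information from all others
```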
The session will discuss advances, early results and best practices related to the preparation and provisioning of curated data, construction and evaluation of model architectures, scaling properties and computational characteristics of model pretraining, use cases and finetuning of downstream applications, and MLops for the deployment of models for research and applications. The session also encourages discussion on broad community involvement toward the development of open foundation models for science that are accessible for all.
Modern challenges of climate change, disaster management, public health and safety, resources management, and logistics can only be addressed through big data analytics. A variety of modern technologies are generating massive volumes of conventional and non-conventional data at local and global scales. Most of this data includes a geospatial component and is analysed using spatial algorithms. Ignoring the geospatial component of big data can lead to inappropriate interpretation of the extracted information. This gap has been recognised and has led to the development of new spatiotemporally aware strategies and methods.
This session discusses advances in spatiotemporal machine learning methods and the software and infrastructures to support them.
Geospatial artificial intelligence (GeoAI) has gained popularity for creating maps, performing analyses, and developing geospatial applications that are national, international or global in scope, thanks to its capacity to process and understand large geospatial data and infer valuable patterns and information. Rapid geo-information updates, public safety improvement, smart city developments, green deal transition as well as climate change mitigation and adaptation are among the problems that can now be studied and addressed using GeoAI.
Along with the acceleration of GeoAI adoption, a new set of implementation challenges is ascending to the top of the agenda for leaders in mapping technologies. These challenges relate to “operationalizing large GeoAI systems”, including automating the AI lifecycle, tracking and adapting models to new contexts and landscapes, temporal and spatial upscaling of models, improving explainability, balancing cost and performance, creating resilient and future-proof AI and IT operations, and managing activities across Cloud and on-premise environments.
This session aims to provide a venue to present the latest applications of GeoAI for mapping at national, international and global scales as well as their operationalization challenges. The themes of the session include, but are not limited to:
· Requirements of GeoAI methods for national mapping agencies, their relationship with industrial/commercial stakeholders, and the role of national agencies in establishing GeoAI standards.
· GeoAI interoperability and research translation.
· Extracting core geospatial layers and enhancing national basemaps from multi-scale, multi-modal remote-sensing data sources.
· Large-scale point cloud analysis for use cases in infrastructure development, urban planning, forest inventory, energy consumption/generation modeling, and natural resources management.
· Measuring rates and trends of changes in landscape patterns and processes such as land-cover/land-use change detection and disaster damage proxy mapping.
· Modernizing national archives, including geo-referencing, multi-temporal co-registration, super-resolution, colorization, and analysis of historical air photos.
The recent growing number of probes in the heliosphere and the future missions in preparation have led to the current decade being labelled "the golden age of heliophysics research". With more viewpoints and more data downlinked to Earth, machine learning (ML) has become a precious tool for planetary and heliospheric research, helping to process the increasing amount of data and aiding the discovery and modelling of physical systems. Recent years have also seen the development of novel approaches leveraging complex data representations with highly parameterised machine learning models and combining them with well-defined and understood physical models. These advancements in ML with physical insights, or physically informed neural networks, inspire new questions about how each field can help develop the other. To better understand this intersection between data-driven learning approaches and physical models in planetary sciences and heliophysics, we seek to bring ML researchers and physical scientists together in this session and to stimulate the interaction of both fields by presenting state-of-the-art approaches and cross-disciplinary visions of the field.
The "ML for Planetary Sciences and Heliophysics" session aims to provide an inclusive and cutting-edge space for discussions and exchanges at the intersection of machine learning, planetary and heliophysics topics. This space covers (1) the application of machine learning/deep learning to space research, (2) novel datasets and statistical data analysis methods over large data corpora, and (3) new approaches combining learning-based with physics-based to bring an understanding of the new AI-powered science and the resulting advancements in heliophysics research.
Topics of interest include all aspects of ML and heliophysics, including, but not limited to: space weather forecasting, computer vision systems applied to space data, time-series analysis of dynamical systems, new machine learning models and data-assimilation techniques, and physically informed models.
Deep Learning has seen accelerated adoption across Hydrology and the broader Earth Sciences. This session highlights the continued integration of deep learning and its many variants into traditional and emerging hydrology-related workflows. We welcome abstracts related to novel theory development, new methodologies, or practical applications of deep learning in hydrological modeling and process understanding. This might include, but is not limited to, the following (a brief illustrative sketch follows the list):
(1) Development of novel deep learning models or modeling workflows.
(2) Probing, exploring and improving our understanding of the (internal) states/representations of deep learning models to improve models and/or gain system insights.
(3) Understanding the reliability of deep learning, e.g., under non-stationarity and climate change.
(4) Modeling human behavior and impacts on the hydrological cycle.
(5) Deep Learning approaches for extreme event analysis, detection, and mitigation.
(6) Natural Language Processing in support of models and/or modeling workflows.
(7) Uncertainty estimation for and with Deep Learning.
(8) Applications of Large Language Models (e.g. ChatGPT, Bard, etc.) in the context of hydrology.
(9) Advances towards foundational models in the context of hydrology and Earth Sciences more generally.
(10) Exploration of different optimization strategies, such as self-supervised learning, unsupervised learning, and reinforcement learning.
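As a minimal illustration of item (1) above, the sketch below shows a toy LSTM rainfall-runoff model in PyTorch; the data are synthetic and the architecture, sizes and variable names are illustrative assumptions rather than a prescribed workflow.

```python
import torch
import torch.nn as nn

class RainfallRunoffLSTM(nn.Module):
    """Toy LSTM mapping meteorological forcings to discharge."""
    def __init__(self, n_forcings=3, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_forcings, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, time, n_forcings)
        h, _ = self.lstm(x)
        return self.head(h[:, -1, :])     # discharge at the last time step

# Synthetic example: 16 sequences of 365 daily forcings (e.g. precip, temp, PET)
x = torch.randn(16, 365, 3)
y = torch.randn(16, 1)

model = RainfallRunoffLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
optimizer.step()
```

In practice such models are trained on large samples of catchments with observed discharge rather than random tensors; the snippet only fixes the shapes and training step.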
Proper characterization of uncertainty remains a major research and operational challenge in the Environmental Sciences and is inherent to many aspects of modelling, impacting model structure development; parameter estimation; adequate representation of the data (input data and data used to evaluate the models); initial and boundary conditions; and hypothesis testing. To address this challenge, methods that have proved very helpful include a) uncertainty analysis (UA), which seeks to identify, quantify and reduce the different sources of uncertainty, as well as to propagate them through the model, and b) the closely related methods for sensitivity analysis (SA), which evaluate the role and significance of uncertain factors in the functioning of systems/models.
This session invites contributions that discuss advances, both in theory and/or application, in (Bayesian) UA methods and methods for SA applicable to all Earth and Environmental Systems Models (EESMs), which embrace all areas of hydrology, such as classical hydrology, subsurface hydrology and soil science.
Topics of interest include (but are not limited to):
1) Novel methods for effective characterization of sensitivity and uncertainty
2) Novel methods for spatial and temporal evaluation/analysis of models
3) Novel approaches and benchmarking efforts for parameter estimation
4) Improving the computational efficiency of SA/UA (efficient sampling, surrogate modelling, parallel computing, model pre-emption, model ensembles, etc.; see the sketch after this list)
5) The role of information and error on SA/UA (e.g., input/output data error, model structure error, parametric error, regionalization error in environments with no data etc.)
6) Methods for evaluating model consistency and reliability as well as detecting and characterizing model inadequacy
7) Analyses of over-parameterised models enabled by AI/ML techniques
8) Robust quantification of predictive uncertainty for model surrogates and machine learning (ML) models
9) Approaches to define meaningful priors for ML techniques in hydro(geo)logy
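As a concrete illustration of topic 4, the sketch below runs a variance-based (Sobol) sensitivity analysis with the open-source SALib package on the standard Ishigami test function; the sample size and bounds are placeholder choices.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Toy stand-in for an environmental model: the Ishigami test function
def model(x):
    return np.sin(x[:, 0]) + 7 * np.sin(x[:, 1]) ** 2 + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0])

problem = {
    "num_vars": 3,
    "names": ["x1", "x2", "x3"],
    "bounds": [[-np.pi, np.pi]] * 3,
}

X = saltelli.sample(problem, 1024)   # Saltelli sampling scheme for Sobol indices
Y = model(X)
Si = sobol.analyze(problem, Y)
print(Si["S1"])   # first-order indices: direct contribution of each factor
print(Si["ST"])   # total-order indices: including interactions
```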
The invited speaker of this session is Francesca Pianosi (University of Bristol).
The complexity of hydrological systems poses significant challenges to their prediction and understanding capabilities. The rise of machine learning (ML) provides powerful tools for modeling these intricate systems. However, realizing their full potential in this field is not just about algorithms and data, but requires a cooperative interaction between domain knowledge and data-driven power. This session aims to explore the frontier of this convergence, examining how prior understanding of hydrological and land surface processes or causal representations can be incorporated into data-driven models, and conversely, how ML might enrich our causal or physical understanding of system dynamics and mechanisms.
We invite researchers working at the intersection of explainable ML/AI and hydrological or Earth system sciences to share their methods, results, and insights. Submissions are welcome on topics including, but not limited to:
- Explainability and transparency in ML/AI modeling of hydrological and Earth systems;
- Integration of hydrological processes and knowledge in ML/AI models (a minimal sketch follows this list);
- Multiscale and multiphysics representation in ML/AI models;
- Causal representation learning in hydrological and earth systems;
- Strategies for balancing model performance and interpretability;
- Leveraging insights from data science and XAI to deepen physical understanding;
- Data-driven approaches to causal analysis in hydrological and Earth systems;
- Challenges, limitations, and solutions related to hybrid models and XAI.
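As a minimal, hedged sketch of the second bullet (integrating process knowledge into a data-driven model), the toy example below adds the residual of a linear-reservoir ODE, dS/dt = P - S/k, to the training loss of a small PyTorch network; the ODE, forcing, observations and all names are illustrative assumptions, not a recommended setup.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))  # maps t -> storage S(t)
k = 5.0                                   # toy recession constant
P = lambda t: torch.ones_like(t)          # toy constant precipitation forcing

t_obs = torch.tensor([[0.0], [1.0], [2.0]])      # times with (toy) observations
S_obs = torch.tensor([[0.0], [0.8], [1.4]])      # toy observed storage
t_col = torch.linspace(0, 3, 50).reshape(-1, 1).requires_grad_(True)  # collocation points

optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(1000):
    optimizer.zero_grad()
    data_loss = ((net(t_obs) - S_obs) ** 2).mean()            # fit the observations
    S = net(t_col)
    dSdt = torch.autograd.grad(S.sum(), t_col, create_graph=True)[0]
    physics_loss = ((dSdt - (P(t_col) - S / k)) ** 2).mean()  # penalize ODE residual
    (data_loss + physics_loss).backward()
    optimizer.step()
```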
This groundbreaking session merges the forefront of digital technology and explainable artificial intelligence (XAI) to redefine our approach to natural hazard management and resilience. As natural hazards such as earthquakes, floods, landslides, and wildfires become more frequent and severe, leveraging advanced digital solutions is crucial. This session delves into the synergistic application of remote sensing, machine learning, geographic information systems (GIS), IoT, quantum computing, digital twins, and VR/AR in understanding, predicting, and managing natural disasters.
We place a special emphasis on the role of eXplainable AI (XAI) in demystifying AI-driven predictive models. By exploring algorithms like SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), we aim to make AI predictions in natural hazard assessment transparent and trustworthy. This approach not only enhances the predictive accuracy but also fosters trust and understanding among stakeholders.
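For illustration, here is a minimal sketch of the SHAP workflow named above, using a scikit-learn random forest on synthetic susceptibility-style data; the feature names and data are invented for the example.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for a hazard-susceptibility dataset
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                      # e.g. slope, rainfall, soil moisture
y = X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=500)

model = RandomForestRegressor(n_estimators=100).fit(X, y)

explainer = shap.TreeExplainer(model)              # fast, exact attributions for tree ensembles
shap_values = explainer.shap_values(X)             # per-sample, per-feature contributions
shap.summary_plot(shap_values, X, feature_names=["slope", "rainfall", "soil_moisture"])
```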
Attendees will gain insights into cutting-edge research and practical applications, showcasing how these integrated technologies enable real-time monitoring, early warning systems, and effective communication strategies for disaster management. The session will feature case studies highlighting the successful application of these technologies in diverse geographic regions and hazard scenarios. This interdisciplinary platform is dedicated to advancing our capabilities in mitigating the risks and impacts of natural hazards, paving the way for safer, more resilient communities in the face of increasing environmental challenges.
Time series are a very common type of data set generated by observational and modeling efforts across all fields of Earth, environmental and space sciences. The characteristics of such time series may, however, differ vastly between applications – short vs. long, linear vs. nonlinear, univariate vs. multivariate, single- vs. multi-scale, etc. – calling equally for specifically tailored methodologies and for generalist approaches. Likewise, the specific tasks of time series analysis span a vast body of problems, including
- dimensionality/complexity reduction and identification of statistically and/or dynamically meaningful modes of (co-)variability,
- statistical and/or dynamical modeling of time series using stochastic or deterministic time series models or empirical components derived from the data,
- characterization of variability patterns in the time and/or frequency domain (a brief sketch follows below),
- quantification of various aspects of time series complexity and predictability,
- identification and quantification of different flavors of statistical interdependencies within and between time series, and
- discrimination between mere correlation and true causality among two or more time series.
According to this broad range of potential analysis goals, there exists a continuously expanding plethora of time series analysis concepts, many of which are only known to domain experts and have hardly found applications beyond narrow fields despite being potentially relevant for others, too.
Given the broad relevance and rather heterogeneous application of time series analysis methods across disciplines, this session shall serve as a knowledge incubator fostering cross-disciplinary knowledge transfer and corresponding cross-fertilization among the different disciplines gathering at the EGU General Assembly. We equally solicit contributions on methodological developments and theoretical studies of different methodologies as well as applications and case studies highlighting the potentials as well as limitations of different techniques across all fields of Earth, environmental and space sciences and beyond.
Co-organized by BG2/CL5/EMRP2/ESSI1/G1/GI2/HS13/SM3/ST2
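As a small, concrete example of the frequency-domain characterization listed above, the sketch below estimates a power spectral density with SciPy's Welch method on a synthetic daily series; the sampling rate and signal shape are placeholders.

```python
import numpy as np
from scipy.signal import welch
import matplotlib.pyplot as plt

# Synthetic daily series: annual cycle plus a red-noise-like background
n, fs = 3650, 1.0                       # 10 years of daily samples (fs in 1/day)
t = np.arange(n)
x = np.sin(2 * np.pi * t / 365.25) + np.random.randn(n).cumsum() * 0.01

f, Pxx = welch(x, fs=fs, nperseg=1024)  # Welch power spectral density estimate
plt.loglog(f, Pxx)
plt.xlabel("frequency (1/day)")
plt.ylabel("PSD")
plt.show()
```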
The world is witnessing a transformation of long-held paradigms in the face of unprecedented grand environmental and social challenges. These complex, interconnected issues demand collaborative, innovative, and data-driven approaches. International scientific infrastructures play a pivotal role in advancing research on these challenges by facilitating data sharing, promoting FAIR (Findable, Accessible, Interoperable, and Reusable) data principles, and upholding CARE (Collective Benefit, Authority to Control, Responsibility, and Ethics) principles. This session invites abstracts from scientists, developers, and decision-makers to explore how international scientific infrastructures are shaping the future of research and decision-making in the geosciences and beyond.
We invite research and insights into the role and progress of AI/ML, open science, FAIR principles, governance, collaborative research, and ethical data sharing as applied to climate research and modeling, dynamic satellite mapping of the Earth's surface, 3D/4D mapping of the subsurface, early warning systems, water security, capacity building, and the evaluation of impact.
Workflow methodologies and systems are fundamental tools for scientific experimentation, especially when complex computational systems such as distributed or high-performance computing are required; they improve scientific productivity and help meet criteria essential for the reproducibility and provenance of results.
Recent advances and upcoming developments in Earth System Science (ESS) face the challenge of having to i) efficiently handle close-to-exascale data volumes and ii) provide methods to make the information content readily accessible and usable by both scientists and downstream communities.
Concurrently, awareness of the importance of the reproducibility and replicability of research results has increased considerably in recent years. Reproducibility refers to the possibility of independently arriving at the same scientific conclusions. Replicability, or replication, is achieved if the execution of a scientific workflow arrives at the same result as before.
A sensible orchestration of these two aspects requires seamless workflow tools employed at compute and data infrastructures, which also enable the capture of the required provenance information needed to, in an extreme case, rerun large simulations and analysis routines to provide trust in model fidelity, data integrity and decision-making processes. Here, reproducibility, or even replicability, and dedication to Open Science and FAIR data principles are key. Further, this enables communities of practice to establish best practices in applying future-proof workflows among a critical mass of users, thereby facilitating adoption.
This session discusses latest advances in workflow techniques for ESS in a two-tiered organizational structure, focusing on:
- sharing use cases, best practices and progress from various initiatives that improve different aspects of these technologies, such as eFlows4HPC (Enabling dynamic and Intelligent workflows in the future EuroHPC ecosystem), Climate Digital Twin (Destination Earth), or EDITO (European Digital Twin Ocean) Model-Lab;
- current approaches, concepts and developments in the area of reproducible workflows in ESS, like requirements for reproducibility and replicability including provenance tracking; technological and methodological components required for data reusability and future-proof research workflows; FAIR Digital Objects (FDOs); (meta)data standards, linked-data approaches, virtual research environments and Open Science principles.
Advances in computational capacities, technologies in modelling and information systems and increasing availability of observational data have given rise to ideas to apply the concept of digital twins to the Earth system and its components. Different projects or initiatives are now working to develop information systems that not only employ and advance state-of-the-art simulation systems, but also allow their users to interact with these systems more directly, e.g. by configuring or initiating simulations, coupling models with additional data streams and workflows, visualizing simulations interactively. Applications in a wide variety of sectors stand to benefit from these developments, including disaster risk management, climate adaptation, agriculture and forestry, renewable energy, public health management, and others.
This session invites contributions on current developments in Digital Twin initiatives. These may focus on technology developments and challenges, data management, interactivity tools, or application demonstrations.
Research data infrastructures (RDIs) serve to manage and share research products in a systematic way to enable research across all scales and disciplinary boundaries. Their services support researchers in data management and collaborative analysis throughout the entire data lifecycle.
For this, fostering FAIRness and openness, e.g. by applying established standards for metadata, data, and/or scientific workflows, is crucial. Through their offerings and services, RDIs can shape research practices and are strongly connected with the communities of users that identify and associate themselves with them.
Naturally, the potential of RDIs faces many challenges. Even though it is clear that RDIs are indispensable for solving big societal problems, their wide adoption requires a cultural change within research communities. At the same time, RDIs themselves must be developed further to serve user needs, their sustainability must be improved, international cooperation increased, and duplication of development efforts avoided. To provide a community of diverse career stages and backgrounds with a convincing infrastructure that is established beyond national and institutional boundaries, new collaboration patterns and funding approaches must be tested so that RDIs can foster cultural change in academia and be a reliable foundation for FAIR and open research. This needs to happen while academia struggles with improving researcher evaluation, with a continuing digital disruption, with enhancing scholarly communication, and with diversity, equity, and inclusion.
In Earth System Science (ESS), several research data infrastructures and components are currently being developed at different regional and disciplinary scales, all of which face these challenges at some level.
This session provides a forum to exchange methods, stories, and ideas to enable cultural change and international collaboration in scientific communities, to bridge the gap between user needs and available solutions, and to build sustainable software solutions.
Cloud computing has emerged as the dominant paradigm, supporting practically all industrial applications and a significant number of academic and research projects. Since its inception and subsequent widespread adoption, migrating to cloud computing has presented a substantial challenge for numerous organisations and enterprises. Leveraging cloud technologies to process big data in proximity to their physical location represents an ideal use case. These cloud resources provide the requisite infrastructure and tools, especially when accompanied by high-performance computing (HPC) capabilities.
Pangeo (pangeo.io) is a global community of researchers and developers that tackle big geoscience data challenges in a collaborative manner using HPC and Cloud infrastructure. This session's aim is threefold:
(1) to focus on Cloud/Fog/Edge computing use cases and identify the status of, and steps towards, wider cloud computing adoption in Earth Observation and Earth System Modelling;
(2) to motivate researchers who are using or developing in the Pangeo ecosystem to share their endeavours with a broader community that can benefit from these new tools.
(3) to contribute to the Pangeo community in terms of potential new applications for the Pangeo ecosystem, containing the following core packages: Xarray, Iris, Dask, Jupyter, Zarr, Kerchunk and Intake.
We warmly welcome contributions that detail various Cloud computing initiatives within the domains of Earth Observation and Earth System Modelling, including but not limited to:
- Cloud federations, scalability and interoperability initiatives across different domains, multi-provenance data, security, privacy and green and sustainable computing.
- Cloud applications, infrastructure and platforms (IaaS, PaaS, SaaS and XaaS).
- Cloud-native AI/ML frameworks and tools for processing data.
- Operational systems on the cloud.
- Cloud computing and HPC convergence and workload unification for EO data processing.
We also welcome presentations using at least one of Pangeo’s core packages in any of the following domains:
- Atmosphere, Ocean and Land Models
- Satellite Observations
- Machine Learning
- And other related applications
We welcome any contributions in the above themes presented as science-based in other EGU sessions, but more focused on research, data management, software and/or infrastructure aspects. For instance, you can showcase your implementation through live executable notebooks.
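As an example of the kind of executable content meant here, below is a minimal Pangeo-style pattern with Xarray and Dask-backed chunks; the file name, variable name and chunk sizes are placeholders and a local NetCDF file is assumed.

```python
import xarray as xr

# Lazily open a (hypothetical) large NetCDF file with Dask-backed chunks
ds = xr.open_dataset("big_model_output.nc", chunks={"time": 120})

# Build a lazy task graph: monthly climatology of sea surface temperature
climatology = ds["sst"].groupby("time.month").mean("time")

# Trigger the parallel computation (on a laptop or a Dask cluster alike)
result = climatology.compute()
print(result)
```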
As the urgency of addressing complex global challenges such as climate change, biodiversity loss, and sustainable resource management increases, the role of exploiting and producing high-quality, high-resolution geospatial information in these efforts is becoming increasingly crucial. Google Earth Engine has emerged as a powerful and well-established tool for harnessing the potential of these data products, reducing reliance on desktop and in-house computational platforms and providing researchers and developers with a compelling cloud-based alternative for planetary-scale geospatial analysis.
This session invites contributions from developers, researchers, and users providing cloud-based solutions to key problems that push the boundaries of what is possible with Google Earth Engine. We welcome submissions focusing on, but not limited to:
- Novel applications and case studies demonstrating the use of Earth Engine in addressing real-world problems
- Generation and visualization of new databases and remote sensing products customized to specific applications
- Development of new tools, extensions, or apps that enhance the functionality of Earth Engine
- Methodological innovations in Earth Engine, including the implementation of advanced geospatial algorithms
- Efforts to integrate Earth Engine with other data sources or analytical tools
- Discussions on challenges, lessons learned, and future directions in the use of Earth Engine for geospatial analysis.
Whether you are using Earth Engine to map deforestation, predict flood risks, track disease spread, or develop new analytical tools, this session is your opportunity to share your work, learn from others, and explore the future of geospatial analysis with Google Earth Engine.
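By way of illustration, the sketch below uses the Earth Engine Python API to build a cloud-filtered Sentinel-2 NDVI composite; the location, date range and cloud threshold are placeholder choices, and prior authentication (ee.Authenticate()) is assumed.

```python
import ee

ee.Initialize()  # assumes the account has been authenticated beforehand

aoi = ee.Geometry.Point(8.55, 47.37)  # placeholder location

composite = (
    ee.ImageCollection("COPERNICUS/S2_SR")
    .filterBounds(aoi)
    .filterDate("2023-06-01", "2023-09-01")
    .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20))  # drop cloudy scenes
    .median()                                             # per-pixel median composite
)

# NDVI from the near-infrared (B8) and red (B4) bands
ndvi = composite.normalizedDifference(["B8", "B4"]).rename("NDVI")
print(ndvi.getInfo()["bands"][0]["id"])
```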
Data plays a crucial role in driving innovation and making informed decisions. The European strategy for data is a vision of data spaces that aims to foster creativity and open data, while also prioritizing personal data protection, consumer safeguards, and FAIR principles. Satellite imagery has transformative potential, but limitations of data size and access have previously constrained applications.
Additional thematic or geographical data spaces are being developed, such as the European sectorial data spaces, the Copernicus Data Space Ecosystem and the Green Deal Data Space. These provide access to high-quality, interoperable data through streamlined access, on-board processing and online visualization, generating actionable knowledge and supporting more effective decision-making. The novel tools of this digital ecosystem create a vast range of opportunities for geoscience research, development and communication from local to global scales. Operational applications such as monitoring networks and early warning systems are built on top of these infrastructures, facilitating governance and sustainability in the face of global challenges. Worldwide satellite imagery data series can be accessed through API systems, providing analysis-ready data for advanced machine learning applications. Taken together, these advances in data availability, analysis tools and processing capacity are transformative for geoscience research. There is a growing demand for a deeper understanding of their design, establishment, integration, and evolution within the environmental and Earth sciences. As a geoscience community, it is imperative that we explore how data spaces can revolutionize our work and actively contribute to their development.
This session connects developers and users of the Copernicus Data Space Ecosystem and other European satellite data infrastructures and data spaces, showing how data spaces facilitate the sharing, integration, and flexible processing of environmental and Earth system data from diverse sources. The speakers will discuss how ongoing efforts to build data spaces connect with existing initiatives on data sharing and processing, and present examples of innovative services that can be developed using data spaces. By leveraging these cutting-edge tools within the digital ecosystem, the geoscience community will gain access to a vast range of opportunities for research, development, and communication at local and global scales.
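As a concrete, hedged example of such API-based access, the sketch below issues an openEO request to the Copernicus Data Space Ecosystem; the spatial and temporal extents are placeholders and an OIDC login is assumed.

```python
import openeo

# Connect to the Copernicus Data Space Ecosystem openEO endpoint
connection = openeo.connect("openeo.dataspace.copernicus.eu").authenticate_oidc()

cube = connection.load_collection(
    "SENTINEL2_L2A",
    spatial_extent={"west": 8.4, "south": 47.3, "east": 8.6, "north": 47.5},
    temporal_extent=["2023-06-01", "2023-09-01"],
    bands=["B04", "B08"],
)

# Server-side NDVI and temporal aggregation; only the result is downloaded
ndvi = cube.ndvi(nir="B08", red="B04").reduce_dimension(dimension="t", reducer="mean")
ndvi.download("ndvi_mean.tiff")
```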
Earth System Models (ESMs) have evolved considerably in complexity, capability and scale as evidenced in projects such as the Coupled Model Intercomparison Project Phase 6 and the forthcoming CMIP7 project.
Coupled Earth system interactions such as feedbacks and potential abrupt changes are a significant source of uncertainty in our current understanding of the Earth system and how it might respond to future human interventions.
There is therefore a need to credibly assess such developments and capabilities for effective research on climate variability and change.
This session will examine physical, biogeochemical and biophysical processes likely to affect the evolution of the Earth system over the coming decades and centuries. Particularly welcome are contributions focusing on (a) the latest advances in the representation of these couplings and interactions within state-of-the-art numerical models; (b) novel experimental designs to help improve quantification of these feedbacks, including those targeting CMIP7 activities; and (c) novel approaches for benchmarking and evaluation of ESMs, including cross-domain and process-based evaluation, observational uncertainties, and science and performance metrics and benchmarks.
This session arises from the joint initiative of the CMIP7 Model Benchmarking Task Team and the EU-funded ESM2025 and OptimESM projects.
In recent decades, the use of geoinformation technology has become increasingly important in understanding the Earth's environment. This session focuses on modern open-source software tools, including those built on top of commercial GIS solutions, developed to facilitate the analysis of mainly geospatial data in various branches of geosciences. Earth science research has become more collaborative with shared code and platforms, and this work is supported by Free and Open Source Software (FOSS) and shared virtual research infrastructures utilising cloud and high-performance computing. Contributions will showcase practical solutions and applications based on FOSS, cloud-based architecture, and high-performance computing to support information sharing, scientific collaboration, and large-scale data analytics. Additionally, the session will address the challenges of comprehensive evaluations of Earth Systems Science Prediction (ESSP) systems, such as numerical weather prediction, hydrologic prediction, and climate prediction and projection, which require large storage volumes and meaningful integration of observational data. Innovative methods in open frameworks and platforms will be discussed to enable meaningful and informative model evaluations and comparisons for many large Earth science applications from weather to climate to geo in the scope of Open Science 2.0.
In the Environmental and Solid Earth research fields, addressing complex scientific and societal challenges with holistic solutions within the dynamic landscape of data-driven science underscores the critical need for data standardisation, integration and interoperability. Just as humans communicate effectively to share insights, machines must seamlessly exchange data. High-capacity computing services allow for the discovery and processing of large amounts of information, boosting the integration of data from different scientific domains and allowing environmental and solid Earth research to thrive on interdisciplinary collaboration and the potential of big data.
As earth and environmental researchers, our expertise is essential in addressing natural and ecological problems, and it extends to our engagement with operational infrastructures (the Environmental Research Infrastructures-ENVRIs, the European Open Science Cloud-EOSC, the EGI Federation, among others). Data repositories, e-service providers and other research or e-infrastructures support scientific development with interoperability frameworks and technical solutions to effectively bridge the traditional boundaries between the disciplines and enhance machine-to-machine (M2M) interactions, enabling data and service interoperation.
Join this session to explore real-world examples from earth and environmental scientists (from atmosphere, marine, ecosystems or solid earth), data product developers, data scientists and engineers, whether you have navigated infrastructures, addressed data analytics, visualisation and access challenges, or embraced the transformative potential of digital twins; whether you have gained expertise in data collection, quality control and processing, employed infrastructures to expedite your research, or participated in Virtual Access and/or Transnational Access programs to expand your horizons. We invite researchers with diverse expertise in data-driven research to showcase impactful scientific use cases, discuss interdisciplinary methodologies, or propose best practices built on successful interoperability frameworks. Join us as we explore ways to enhance the FAIRness of earth and environmental data, fostering open science within and beyond our fields.
The United Nations (UN) 2030 Agenda for Sustainable Development set a milestone in the evolution of society's efforts towards sustainable development which must combine social inclusion, economic growth, and environmental sustainability. The definition of the Sustainable Development Goals (SDGs) and the associated Global Indicator Framework represent a data-driven framework helping countries in evidence-based decision-making and development policies.
Earth observation (EO) data, including satellite and in-situ networks, together with EO data analytics and machine learning, play a key role in assessing progress toward the SDGs: they can make 2030 Agenda monitoring and reporting technically and financially viable, and can make the monitoring and reporting of SDG indicators comparable across countries.
This session invites contributions on how to use Earth observation data for SDG monitoring and reporting; it particularly welcomes presentations about EO-driven scientific approaches, EO-based tools, and EO scientific initiatives and projects to build, assess and monitor UN SDG indicators.
Understanding Earth's natural processes, particularly in the context of global climate change, has gained widespread recognition as an urgent and central research priority that requires further exploration. Recent advancements in satellite technology, characterized by new platforms with high revisit times and the growing capabilities for collecting repetitive ultra-high-resolution aerial images through unmanned aerial vehicles (UAVs), have ushered in exciting opportunities for the scientific community. These developments pave the way for developing and applying innovative image-processing algorithms to address longstanding and emerging environmental challenges.
The primary objective of the proposed session is to convene scientific researchers dedicated to the field of satellite and aerial time-series imagery. The aim is to showcase ongoing research efforts and novel applications in this dynamic area. This session is specifically focused on presenting studies centred around the creation and utilization of pioneering algorithms for processing satellite time-series data, as well as their applications in various domains of remote sensing, aimed at investigating long-term processes across all Earth's realms, including the sea, ice, land, and atmosphere.
In today's era of unprecedented environmental challenges and the ever-increasing availability of data from satellite and aerial sources, this session serves as a platform to foster collaboration and knowledge exchange among experts working on the cutting edge of Earth observation technology. By harnessing the power of satellite and aerial time-series imagery, we can unlock valuable insights into our planet's complex systems, ultimately aiding our collective efforts to address pressing global issues such as climate change, natural resource management, disaster mitigation, and ecosystem preservation.
The session organizers welcome contributions from researchers engaged in applied and theoretical research. These contributions should emphasize fresh methods and innovative satellite and aerial time-series imagery applications across all geoscience disciplines. This inclusivity encompasses aerial and satellite platforms and the data they acquire across the electromagnetic spectrum.
Earth's dynamic and complex environmental systems are continually evolving, driven by various natural and anthropogenic factors. In an era marked by increasing hazards, dwindling natural resources, and the undeniable effects of climate change, harnessing the power of cutting-edge technologies to address these critical issues at the local level is essential. To monitor and understand these changes, scientists increasingly rely on time series remote sensing data and advanced artificial intelligence (AI) geospatial tools. The session aims to showcase the latest advancements in remote sensing technology and geospatial software that facilitate the monitoring and analysis of local hazards, natural resources, and climate change impacts. We will discuss case studies and research findings demonstrating the effectiveness of time series remote sensing data in addressing specific local challenges. The goal is to foster interdisciplinary collaboration among researchers, policymakers, and practitioners to develop actionable strategies for addressing local issues.
The proposed session aims to bring together experts and researchers from diverse disciplines to explore innovative approaches and solutions using time series aerial and satellite remote sensing data combined with geospatial technology software. The session will delve into the practical applications of these technologies in understanding and mitigating local challenges related to hazards, natural resources, and climate change.
The conveners welcome contributions from interdisciplinary scientists, educators, innovators, policy makers and local stakeholders in applied and theoretical domains, emphasizing innovative methodologies and practical applications of time-series imagery from satellite and aerial sources to explore solutions for tackling local challenges related to hazards, natural resources, and climate change. We encourage the use of data acquired across the electromagnetic spectrum (optical and SAR) worldwide via aerial and satellite platforms.
The visualization and user-friendly exploration of information from scientific data is one of the main tasks of good scientific practice. But steady increases in temporal and spatial resolutions of modeling and remote sensing approaches lead to ever-increasing data complexity and volumes. On the other hand, earth system science data are getting increasingly important as decision support for stakeholders and other end users far beyond the scientific domains.
This poses major challenges for the entire process chain, from data storage to web-based visualization. For example, (1) the data has to be enriched with metadata and made available via appropriate and efficient services; (2) visualization and exploration tools must then access the often decentralized tools via interfaces that are as standardized as possible; (3) the presentation of the essential information must be coordinated in co-design with the potential end users. This challenge is reflected by the active development of tools, interfaces and libraries for modern earth system science data visualization and exploration.
In this session, we hence aim to establish a transdisciplinary community of scientists, software-developers and other experts in the field of data visualization in order to give a state-of-the-art overview of tools, interfaces and best-practices. In particular, we look for contributions in the following fields:
- Developments of open-source visualization and exploration techniques for earth system science data
- Co-designed visualization solutions enabling transdisciplinary research and decision support for non-scientific stakeholders and end-users
- Tools and best-practices for visualizing complex, high-dimensional and high frequency data
- Services and interfaces for the distribution and presentation of metadata enriched earth system science data
- Data visualization and exploration solutions for decentralized research data infrastructures
All contributions should emphasize the usage of community-driven interfaces and open-source solutions and finally contribute to the FAIRification of products from earth system sciences.
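In the spirit of these open-source solutions, here is a minimal sketch rendering a CF-style global field with xarray, Matplotlib and Cartopy; the data are synthetic and stand in for any metadata-enriched source.

```python
import numpy as np
import xarray as xr
import matplotlib.pyplot as plt
import cartopy.crs as ccrs

# Synthetic global 2D field with CF-style latitude/longitude coordinates
lat = np.linspace(-90, 90, 73)
lon = np.linspace(-180, 180, 145)
field = xr.DataArray(
    np.cos(np.deg2rad(lat))[:, None] * np.ones(lon.size),
    coords={"lat": lat, "lon": lon},
    dims=("lat", "lon"),
    name="toy_field",
)

ax = plt.axes(projection=ccrs.Robinson())
field.plot(ax=ax, transform=ccrs.PlateCarree())  # source data are in lat/lon
ax.coastlines()
plt.show()
```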
Humans have been successfully mapping the remotest and most inhospitable places on Earth, and the surfaces and interiors of other planets and their moons, at the highest resolution. The remaining blank spots lie in areas that are hardly accessible through field surveys, geophysical methods or remote sensing due to technical and/or financial challenges. Some of these places are key areas that could help reveal geologic history or enable future exploration endeavours.
Such extreme and remote locations are commonly associated with the ocean floor or planetary surfaces, but these extreme worlds might also be found in hot deserts, under the ice, in high-mountain ranges, in volcanic edifices, hidden underneath dense canopy cover, or located within the near-surface crust. All such locations are prime targets for remote sensing mapping in a wider sense. The methodological and technical repertoire to investigate extreme and remote locations is thus highly specialized, and despite different contexts there are commonalities not only with respect to technical mapping approaches, but also in the way knowledge is gathered, assessed, interpreted and visualized regarding its scientific as well as its economic value.
This session invites contributions to this field of geologic mapping and cartography of extreme (natural) environments with a focus on the scientific synthesis and extraction of information and knowledge.
A candidate contribution might cover, but is not limited to, topics such as:
- ocean mapping using manned and unmanned vehicles and devices,
- offshore exploration using remote sensing techniques,
- crustal investigation through drilling and sampling,
- subsurface investigation using radar techniques,
- planetary geologic and geophysical mapping,
- subglacial geologic mapping, and
- geologic investigation of desert environments.
The aim of this session is to bring together researchers mapping geologically and geophysically inaccessible environments, thus relying on geophysical and remote sensing techniques as a single source for collecting data and information. We would like to keep the focus on geologic and geophysical mapping of spots for which we have no or only very limited knowledge due to the harsh environmental conditions, and we would thus exclude areas that are inaccessible for political reasons.
Remote sensing products have a high potential to contribute to monitoring and modelling of water resources. Nevertheless, their use by water managers is still limited due to lack of quality, resolution, trust, accessibility, or experience.
In this session, we look for new developments that support the use of remote sensing data for water management applications from local to global scales. We are looking for research to increase the quality of remote sensing products, such as higher resolution mapping of land use and/or agricultural practices or improved assessments of river discharge, lake and reservoir volumes, groundwater resources, drought monitoring/modelling and its impact on water-stressed vegetation, as well as on irrigation volumes monitoring and modeling. We are interested in quality assessment of remote sensing products through uncertainty analysis or evaluations using alternative sources of data. We also welcome contributions using a combination of different techniques (physically based models or artificial intelligence techniques) or a combination of different sources of data (remote sensing and in situ) and different imagery types (satellite, airborne, drone). Finally, we wish to attract presentations on developments of user-friendly platforms providing smooth access to remote sensing data for water applications.
We are particularly interested in applications of remote sensing to determine human-water interactions and the climate change impacts on the whole water cycle.
The major contribution to the development of operational time-dependent assessment of seismic hazard (t-DASH) systems, suitable for supporting decision makers with continuously updated seismic hazard scenarios, is expected to come from the real-time integration of multi-parametric observations. A very preliminary step in this direction is the identification of those parameters (seismological, chemical, physical, biological, etc.) whose space-time dynamics and/or anomalous variability can be, to some extent, associated with the complex process of preparation of major earthquakes.
This session therefore encourages studies devoted to demonstrating the added value of introducing specific observations and/or data analysis methods within the t-DASH and short-term earthquake forecast (StEF) perspectives. Studies based on long-term data analyses, including different conditions of seismic activity, are particularly encouraged. Equally welcome are presentations of infrastructures devoted to maintaining and further developing our present observational capabilities of earthquake-related phenomena, thereby also contributing to building a global multi-parametric Earthquakes Observing System (EQuOS) to complement the existing GEOSS initiative.
To this aim, the session is addressed not just to seismology and natural hazards scientists but also to geologists, atmospheric scientists and electromagnetism researchers, whose collaboration is particularly important for fully understanding the mechanisms of earthquake preparation and their possible relation with other measurable quantities. For this reason, all contributions devoted to the description of genetic models of earthquakes' precursory phenomena are equally welcome.
Co-organized by EMRP1/ESSI4/GI5, co-sponsored by JpGU and EMSEV
Natural hazards in the Earth system, such as earthquakes, tsunamis, landslides, volcanic eruptions, cyclones, and extreme weather, primarily brew and occur in the lithosphere and troposphere, often happening unexpectedly and impacting human daily life. Tracing the atmospheric and ionospheric disturbances caused by these hazards benefits nowcasting of their occurrence. On the other hand, solar activities can induce geomagnetic storms that accompany magnetosphere-ionosphere coupling and atmospheric disturbances, which impact satellite operation and global high-precision positioning and navigation, and damage the electric supply system near the Earth’s surface. Impacts of these hazards are not limited to a specific geosphere but often span multiple geospheres, subsequently affecting daily life. Therefore, there is an urgent need for instrumental arrays to monitor useful signals, novel methodologies to retrieve associated data, and numerical simulations to understand the interaction between the lithosphere (hydrosphere), atmosphere, and space (LAS).
In this session, we invite scientists interested in studying the interaction between the lithosphere (hydrosphere), atmosphere, and space; the scope is not limited to natural hazards alone. The interaction between the multiple geospheres can be excited by numerous potential sources, ranging from lithospheric activities in the Earth’s interior to solar activities in space beyond the Earth system. Observations of parameters in one geosphere interacting with others, methodologies for detecting signals related to changes in the other geospheres, and the construction of numerical models spanning multiple geospheres are all welcome. The session aims to bring together scientists from distinct fields to improve and enhance our understanding of LAS interactions. Ultimately, this research aims to mitigate the loss of human life and property in the face of the growing risk posed by natural hazards from the Earth and space.
The conservation, protection, and fruition of cultural heritage are closely related to the environmental setting and its variability. Historical objects, structures, and sites worldwide interact with a broad diversity of environments, on the surface (outdoors or indoors), underground, or underwater. As the characteristics of the Earth’s systems vary in space and time, also in view of climate change, so does the behavior of the materials shaping the cultural assets.
This session addresses the interaction between cultural heritage and the environment from the interdisciplinary perspective of geosciences, which represent a valuable support for investigating the properties and durability of the component materials (e.g., stones, ceramics, mortars, pigments, glasses, and metals); their vulnerability and changes in weathering dynamics; the influence of key environmental variables associated with climate, microclimate, and composition of air, waters, and soils; the impact of global warming, sea level rise, ocean acidification, and extreme weather events; the techniques and products to improve conservation practices; and the adaptation measures for heritage protection. This session welcomes contributions with an explicit and direct connection with environmental issues and questions. The possible research approaches include but are not limited to field and laboratory analysis and testing; damage assessment, observation, and simulation; modeling of decay and risk scenarios; strategies of monitoring and remote investigation; hardware/software design for collecting and processing environmental databases.
Geoscience underpins many aspects of the energy mix that fuels our planet and offers a range of solutions for reducing global greenhouse gas emissions as the world progresses towards net zero. The aim of this session is to explore and develop the contribution of geology, geophysics and petrophysics to the development of sustainable energy resources in the transition to low-carbon energy. The meeting will be a key forum for sharing geoscientific aspects of energy supply as earth scientists grapple with the subsurface challenges of remaking the world’s energy system, balancing competing demands in achieving a low carbon future.
Papers should show the use of technologies initially developed for the conventional oil and gas industry, applied either to sustainable energy developments or to CCS, subsurface waste disposal or water resources.
Relevant topics include but are not limited to:
1. Exploration & appraisal of the subsurface aspects of geothermal, hydro and wind resources.
2. Appraisal & exploration of developments needed to provide raw materials for solar energy, electric car batteries and other rare earth elements needed for the modern digital society.
3. The use of reservoir modelling, 3D quantification and dynamic simulation for the prediction of subsurface energy storage.
4. The use of reservoir integrity cap-rock studies, reservoir modelling, 3D quantification and dynamic simulation for the development of CCS locations.
5. Quantitative evaluation of porosity, permeability, reactive transport & fracture transport at subsurface radioactive waste disposal sites.
6. The use of petrophysics, geophysics and geology in wind-farm design.
7. The petrophysics and geomechanical aspects of geothermal reservoir characterisation and exploitation including hydraulic fracturing.
Suitable contributions can address, but are not limited to:
A. Field testing and field experimental/explorational approaches aimed at characterizing an energy resource or analogue resources, key characteristics, and behaviours.
B. Laboratory experiments investigating the petrophysics, geophysics, geology as well as fluid-rock-interactions.
C. Risk evaluations and storage capacity estimates.
D. Numerical modelling and dynamic simulation of storage capacity, injectivity, fluid migration, trapping efficiency and pressure responses as well as simulations of geochemical reactions.
E. Hydraulic fracturing studies.
F. Geo-mechanical/well-bore integrity studies.
Pumped Hydropower Storage (PHS) already contributes significantly to flexibly storing excess energy from intermittent renewable sources and to balancing the electric grid in the context of the energy transition to renewable sources.
The objective of the PHS sub-session is to explore the potential for further development and expansion of PHS applications by discussing:
• Integration and hybridisation of intermittent renewable energy sources with PHS
• Potentials for reusing and modernising existing facilities in the water sector, existing reservoirs and sea water plants, open-pit and underground mines
• Concepts for rural and decentralised PHS implementation
• Environmental and social impacts of PHS resulting from its integration with new conceptual approaches
• Additional social and environmental benefits of PHS, including irrigation and drinking water provision, flood and drought risk management, etc.
• Economic drivers and market-dependent requirements for additional benefits and to increase technology export potentials
• Impacts of climate change on the availability of water and the technological mitigation of reservoir volume losses due to sedimentation and evaporation
• Legal considerations and accelerating approval processes
• Attraction of young professionals to maintain engineering knowledge
Energy system modelling (ESM) is a critical tool for understanding and optimizing the complex interactions within modern energy systems. ESM provides frameworks and modelling techniques to analyse the intricate interplay between various energy sources, system infrastructure, technologies, and policies.
The ESM sub-session aims to explore the significance of ESM in facilitating sustainable energy transitions by discussing:
- Key components and areas of energy system modelling, including integrating renewable sources like solar, wind, hydroelectric power, and geothermal combined with energy storage, small-scale energy generation technologies, and grid management systems.
- Various modelling techniques, from optimization and simulation to scenario analysis, including forecasting energy demand, evaluating infrastructure requirements, and assessing the effects of policy interventions (a minimal optimization sketch follows this list).
- The role of stakeholders in the energy system modelling process, from modelling framing and data collection to influences on modelling analysis and selection of modelling parameters.
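As a minimal, illustrative sketch of the optimization techniques mentioned above (all numbers and generator names are hypothetical, not from any session material), a two-generator dispatch problem can be posed as a linear program:

```python
# Toy energy-dispatch optimization: meet demand at minimum cost.
# All values are hypothetical and purely illustrative.
from scipy.optimize import linprog

cost = [40.0, 10.0]                # EUR/MWh for [gas, wind]
demand = 120.0                     # MW that must be supplied
capacity = [(0, 100), (0, 80)]     # (min, max) output per generator in MW

# Equality constraint: gas output + wind output == demand
res = linprog(c=cost, A_eq=[[1.0, 1.0]], b_eq=[demand], bounds=capacity)
print(res.x)  # -> [40. 80.]: use all of the cheap wind, fill the rest with gas
```

Real ESM frameworks solve far larger versions of this problem, with networks, storage and time coupling, but the underlying structure is often the same.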
A worldwide transition towards “Net zero” requires electrification of diverse sectors over the coming decades. In addition to replacing existing fossil fuel-fired power plants with low-carbon energy resources, especially wind, solar and hydropower, additional capacity will need to be added. Renewable resources vary on a wide range of time scales, from minutes to seasons to interannual periods. These variabilities and their spatio-temporal distribution have specific impacts on renewable energy systems, day-to-day operation, strategic planning and the design of “Net zero” pathways. In a changing climate, the availability patterns of renewable energy resources on various timescales are expected to change, and considerable uncertainty underlies predictions of long-term changes in their spatio-temporal patterns. Energy demand, too, often has daily and seasonal patterns that depend on weather, and is likewise subject to variability and change. Given that the balance between demand and generation of electricity must always be maintained for grid reliability, there is a critical need for interdisciplinary dialogue between the climate science community and energy modelling research groups to explore renewable resource variability, in present and future scenarios, and the resulting challenges for electricity grid management worldwide. It is important to identify critical challenges associated with balancing demand and renewable generation, as well as methods and opportunities to address these challenges in the context of a wide range of uncertainties.
Studies may include (but are not limited to):
• Different ways to address the present seasonality of renewable resources and their expected changes in the future
• Uncertainties associated with future resource availability patterns
• Balancing of demand and supply of energy in present and future using different techniques such as bulk energy storage, maintaining an excess of wind-solar capacity, and demand-side management
• Economics and policy implications of the methods to maintain grid reliability
• Spatio-temporal complementarity between the availability patterns of different renewable energy resources (see the sketch after this list)
• Methods to maximize techno-economic synergies between different renewable resources and their hybridisation in the context of variability
• Data needs from climate-science-based modelling to advance understanding of renewable energy resources
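To make the notion of spatio-temporal complementarity concrete, the following minimal sketch (with synthetic, hypothetical capacity factors) quantifies complementarity between solar and wind availability as an anticorrelation of their time series:

```python
# Complementarity of two renewable resources, measured as the correlation
# of their availability time series (synthetic data, for illustration only).
import numpy as np

hours = np.arange(24 * 365)
# Hypothetical diurnal capacity factors: solar peaks at midday, wind at night.
solar = np.clip(np.sin(2 * np.pi * (hours % 24 - 6) / 24), 0, None)
wind = 0.4 + 0.2 * np.cos(2 * np.pi * (hours % 24) / 24)
wind += 0.05 * np.random.default_rng(0).standard_normal(hours.size)

r = np.corrcoef(solar, wind)[0, 1]
print(f"correlation: {r:.2f}")  # a negative r indicates complementary profiles
```

In practice, such analyses would use reanalysis or observed resource data and would be repeated across locations, seasons and climate scenarios.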
Geophysical methods have great potential for characterizing subsurface properties and informing studies of geological reservoirs, hydrology and biogeochemistry. In these contexts, the classically used geophysical tools only provide indirect information about the characteristics and heterogeneities of subsurface rocks. Petrophysical relationships therefore have to be developed to link physical properties (e.g. electrical conductivity, seismic velocity or attenuation) to the intrinsic parameters of interest (e.g. fluid content, hydraulic properties, pressure conditions). With the rise of distributed monitoring techniques, geophysical methods are also deployed to study associated processes (e.g. flow, transport, biogeochemical reactions). This reinforces the need to establish petrophysical models through multidisciplinary approaches and diverse theoretical frameworks. While each physical property has its own intrinsic dependence on pore-scale interfacial, geometrical, and biogeochemical properties or on external conditions such as pressure or temperature, each associated geophysical method also has its own specific investigation depth and spatial resolution. Such complexity poses great challenges in combining theoretical developments with laboratory validations and in scaling laboratory observations to field practice. This session consequently invites contributions from various communities to share their models, experiments, or field tests and data, in order to discuss multidisciplinary ways to advance the development of petrophysical relationships and to improve our knowledge of complex subsurface processes. In the meantime, a range of low-carbon energy technologies incorporates subsurface reservoirs, whether as an energy resource (e.g., diverse types of geothermal energy) or as a storage medium (e.g., hydrogen storage, radioactive waste storage or CO2 sequestration). The session therefore also expects state-of-the-art laboratory experiments focused on geo-reservoir studies through geomechanics, geochemistry, petrophysics and materials science, and welcomes contributions dealing with the development of novel apparatuses, newly developed sensors, or new experimental procedures to simulate geo-reservoir conditions and investigate rock and fluid properties at representative depths.
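One classic illustration of such a petrophysical relationship (given here only as an example, not as session material) is Archie's law, which ties a geophysical observable to intrinsic reservoir parameters:

$$\sigma = \frac{1}{a}\,\sigma_w\,\phi^{m}\,S_w^{n}$$

where $\sigma$ is the bulk rock conductivity, $\sigma_w$ the pore-fluid conductivity, $\phi$ the porosity, $S_w$ the water saturation, and $a$, $m$, $n$ empirical constants calibrated in the laboratory.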
Treeline ecotones are transition zones between closed forest and climatically tree-less areas. Due to their climate sensitivity they are considered sentinels of global-change effects on terrestrial ecosystems. Vegetation patterns in treeline ecotones are constrained by multiple factors acting at different spatial and temporal scales. Climatic treeline positions are strongly influenced by global- and regional-scale climatic patterns, but other factors such as soil, meso-topography, and natural and anthropogenic disturbances dominate patterns at the landscape scale. Moreover, species competition/facilitation and micro-topographic heterogeneity are key factors for vegetation dynamics at finer scales. A current trend in vegetation dynamics both at latitudinal and altitudinal treelines is the accelerated encroachment of trees and shrubs, caused by interactions between climate and land-use changes. This encroachment can have far-reaching consequences for the biodiversity and functioning of mountain and subarctic ecosystems. Spatial vegetation patterns likely hold important information about the factors and processes (e.g. seed dispersal, safe-site characteristics, biotic interactions) that control this encroachment, yet little treeline research deals with the spatial component of patterns and processes. For this reason, it is crucial to improve our understanding of spatial processes and of the spatial signals of global-change impacts in treeline ecotones. A multiscale and multidisciplinary approach is needed to plan better adaptation strategies, to monitor biodiversity trends in such sensitive ecosystems, and to better link treeline metrics to ecological questions. Specifically, remote sensing can be combined with field data and modeling to capture the heterogeneity and variability of ecological conditions in treeline ecotones and to couple observed spatial patterns to ecological processes. In this session, we invite contributions from all fields of research related to either the detection and description of treeline spatial and temporal patterns or the processes that may be relevant for these patterns.
Public information:
SPECIAL ISSUE IN THE JOURNAL BIOGEOSCIENCES: A special issue based on the session topic is scheduled and will focus on observational and modelling studies along latitudinal and elevational treelines of the globe.
Environmental data from large measurement campaigns and automated measurement networks are increasingly available and provide relevant information about the Earth system. However, such data are usually only available as point observations and only represent a small part of the Earth's surface. Upscaling strategies are hence needed to provide continuous and comprehensive information as a baseline for insights into large-scale spatio-temporal dynamics.
In the upscaling, machine learning algorithms that can account for complex and nonlinear relationships are increasingly used to link remote sensing datasets to reference measurements. The resulting models are then applied to provide spatially explicit predictions of the target variable, often even on a global scale.
Due to easy access to user-friendly software, model training and spatial prediction using machine learning algorithms are nowadays straightforward at first sight. However, considerable challenges remain: dealing with reference data that are not independent and identically distributed, accounting for spatial heterogeneity when scaling reference measurements to the grid-cell scale, appropriately evaluating the resulting maps and quantifying their uncertainties, generating robust maps that do not suffer from extrapolation artifacts, and developing strategies for model interpretation and understanding. This session invites contributions on the methodology and application of large-scale mapping strategies in different disciplines, including vegetation characteristics such as foliar or canopy traits and photosynthesis, soil characteristics such as soil organic carbon, or atmospheric parameters such as pollutant concentrations. Methodological contributions can focus on individual aspects of the upscaling approach, such as the design of measurement campaigns or networks to increase representativeness, novel algorithms or validation strategies, as well as uncertainty assessment.
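As a minimal, hedged sketch of one of these challenges (non-i.i.d. reference data; all variable names are illustrative), spatially blocked cross-validation can replace random splits so that the evaluation better reflects prediction into unsampled regions:

```python
# Spatially blocked cross-validation for an upscaling model (illustrative).
# Random CV tends to overestimate skill when reference points cluster in
# space; grouping folds by spatial block gives a more honest error estimate.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 5))              # e.g. remote-sensing predictors
y = 2 * X[:, 0] + rng.normal(size=500)     # e.g. a soil or vegetation target
lon, lat = rng.uniform(0, 10, 500), rng.uniform(0, 10, 500)

# Assign each point to a 2x2-degree block; folds never split a block.
blocks = (lon // 2).astype(int) * 10 + (lat // 2).astype(int)
scores = cross_val_score(RandomForestRegressor(n_estimators=100),
                         X, y, groups=blocks, cv=GroupKFold(n_splits=5))
print(scores.mean())
```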
This session covers climate predictions from seasonal to multi-decadal timescales and their applications. Continuing to improve such predictions is of major importance to society. The session embraces advances in our understanding of the origins of seasonal to decadal predictability and of the limitations of such predictions, as well as advances in improving the forecast skill and reliability and making the most of this information by developing and evaluating new applications and climate services. The session welcomes contributions from dynamical as well as statistical predictions (including machine learning methods) and their combination. This includes predictions of climate phenomena, including extremes and natural hazards, from global to regional scales, and from seasonal to multi-decadal timescales ("seamless predictions"). The session also covers physical processes relevant to long-term predictability sources (e.g. ocean, cryosphere, or land) and predictions of large-scale atmospheric circulation anomalies associated with teleconnections, as well as observational and emergent constraints on climate variability and predictability. Also relevant is the time-dependence of predictive skill and windows of opportunity. Analysis of predictions in a multi-model framework and innovative ensemble-forecast initialization and generation strategies are further foci of the session. The session pays particular attention to innovative methods of quality assessment and verification of climate predictions, including extreme-weather frequencies, post-processing of climate hindcasts and forecasts, and quantification and interpretation of model uncertainty. We particularly invite contributions presenting the use of seasonal-to-decadal predictions for assessing risks from natural hazards, adaptation and further applications.
In this short course, we will introduce students and early-career researchers to the principles of Open Science, data, and software, as well as the benefits open practices can have for their own research careers, for science, and for society. Participants will have the opportunity to explore the practical impact of Open Science for their work, will develop their digital presence, including using an ORCID to build a permanent profile of their work, and will make a plan to share their data, software, and publications as openly as possible. We will go over the open science outcomes and tools that advance research and collaboration, and practice hands-on skills to advance participants' careers through open science practices.
By the end of the course, participants will be able to define open science, discuss its benefits and challenges, identify practices that enable it, and apply strategies for sharing research outputs, data, and software as openly as possible. This course is designed for students and other researchers new to open science; no previous experience with publishing research is required.
Co-organized by EOS4/ESSI6/GM13/NH12/PS8/SSP1, co-sponsored by AGU
In the field of environmental science and big data, mastering data integration, Virtual Research Environments (VREs), web services, and open science practices is crucial. Environmental researchers, with their expertise, address complex natural and ecological challenges. Interdisciplinary collaboration extends beyond humans; scientists and developers collaborate to enhance machine-to-machine (M2M) interactions and enable data and service interoperation across diverse technologies, including web services, in the evolving landscape of data science and technology.
Our comprehensive course brings together environmental researchers, data developers, scientists, and engineers. Through hands-on learning, we aim to deepen your understanding of data integration, VREs, web services, and their pivotal role in environmental science.
Over the past decade, scientific research has seen a revolution thanks to distributed computing infrastructure and open data concepts. Researchers now access abundant cloud computing power. Attendees will learn to find datasets (in the EOSC Marketplace or similar platforms), access EGI cloud resources, and run scientific applications in the cloud for data analysis.
The course will also address the challenges of complex and time-consuming processes when customizing and running data workflows on the cloud using Jupyter notebooks, by teaching participants key technologies for notebook containerization, workflow composition, and cloud automation in a Jupyter notebook-based VRE. We will guide attendees to explore science cases in ecology and biodiversity virtual labs, making it a comprehensive and practical learning experience.
Please remember to bring your own laptop!
Course contributors:
EGI Foundation
University of Amsterdam and LifeWatch ERIC
Lund University and ICOS Carbon Portal
Public information:
In this course you'll gain skills to master data integration and key technologies for workflow composition and cloud automation. You’ll navigate Virtual Research Environments and embrace open science practices for environmental research.
Since Claude Shannon coined the term 'Information Entropy' in 1948, Information Theory has become a central language and framework for the information age. Across disciplines, it can be used for i) characterizing systems, ii) quantifying the information content in data and theory, iii) evaluating how well models can learn from data, and iv) measuring how well models do in prediction. Due to their generality, concepts and measures from Information Theory can be applied to both knowledge- and data-based modelling approaches, and combinations thereof, which makes them very useful in the context of Machine Learning and hybrid modeling.
In this short course, we will introduce the key concepts and measures of Information Theory (Information, Entropy, Conditional Entropy, Mutual Information, Cross Entropy and Kullback-Leibler divergence), with practical examples of how they have been applied in Earth Science, and give a brief introduction to available open-source software.
This course assumes no previous knowledge or experience with Information Theory and welcomes all who are intrigued to learn more about this powerful theory.
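As a hedged taster of these measures (a generic example, not the course's own material), entropy and mutual information can be estimated from binned data with common open-source tools:

```python
# Shannon entropy and mutual information from histogram estimates (toy data).
import numpy as np
from scipy.stats import entropy

rng = np.random.default_rng(1)
x = rng.normal(size=10_000)
y = x + 0.5 * rng.normal(size=10_000)      # y shares information with x

# Entropy H(X) in bits, from a histogram estimate of p(x)
px, _ = np.histogram(x, bins=30)
H_x = entropy(px / px.sum(), base=2)

# Mutual information I(X;Y) = H(X) + H(Y) - H(X,Y)
py, _ = np.histogram(y, bins=30)
pxy, _, _ = np.histogram2d(x, y, bins=30)
H_y = entropy(py / py.sum(), base=2)
H_xy = entropy((pxy / pxy.sum()).ravel(), base=2)
print(f"H(X) = {H_x:.2f} bits, I(X;Y) = {H_x + H_y - H_xy:.2f} bits")
```

Note that histogram-based estimators are biased for small samples; dedicated estimation methods address this more carefully.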
In recent years, machine learning (ML) algorithms have evolved at a very fast pace, revolutionizing, along the way, numerous sectors of modern society. ML has found countless applications in our daily lives, making it almost impossible to describe all of its uses. Notably, artificial neural networks (NNs) stand out as one of the most powerful and diverse classes of models. NN-empowered tools assist in navigating routes to our destinations, provide personalized recommendations for entertainment, suggest shopping preferences, classify emails, translate text, and can even mimic human interactions in the form of chatbots. All of these applications are inspired by the same idea: using artificial intelligence to enhance our lives and boost efficiency when dealing with these tasks. The scientific community has seen a boom in machine learning studies, and many of the latest NN-based models outperform traditional approaches by a very large margin. The potential of integrating NN models into various scientific applications is therefore boundless.
At the same time, NNs are usually criticized for being “black-box” models that are hard to interpret and understand, with an aura of mystery surrounding these algorithms. In this short course, we will delve into the foundations of neural networks, emphasizing approaches and best practices to model training, independent validation and testing, as well as model deployment. We will describe both the basic concepts and building blocks of the neural network architectures, and also touch upon the more advanced models. Our objective is to explain how neural network models can be understood in comprehensive but relatable terms for participants coming from a broad range of backgrounds.
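To make the emphasized train/validate/test practice concrete, here is a minimal, hedged sketch (a generic example, not the course's material):

```python
# Minimal neural-network workflow: train / independent validation / test.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Hold out a test set first, then split the rest into train and validation.
X_tmp, X_test, y_tmp, y_test = train_test_split(X, y, test_size=0.2,
                                                random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_tmp, y_tmp, test_size=0.25,
                                                  random_state=0)

model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X_train, y_train)
print("validation accuracy:", model.score(X_val, y_val))  # tune on this
print("test accuracy:", model.score(X_test, y_test))      # report this once
```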
Data assimilation (DA) is widely used in the study of the atmosphere, the ocean, the land surface, hydrological processes, etc. This powerful technique combines prior information from numerical model simulations with observations to provide a better estimate of the state of the system than either the data or the model alone. This short course will introduce participants to the basics of data assimilation, including the theory and its applications to various disciplines of geoscience. An interactive hands-on example of building a data assimilation system based on a simple numerical model will be given. This will prepare participants to build a data assimilation system for their own numerical models at a later stage after the course.
In summary, the short course introduces the following topics:
(1) DA theory, including basic concepts and selected methodologies.
(2) Examples of DA applications in various geoscience fields.
(3) Hands-on exercise in applying data assimilation to an example numerical model using open-source software (a toy analysis step is sketched below).
This short course is aimed at people who are interested in data assimilation but do not necessarily have experience in data assimilation, in particular early career scientists (BSc, MSc, PhD students and postdocs) and people who are new to data assimilation.
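As a hedged preview of the hands-on part (a toy example, not the course's software), a single scalar data-assimilation analysis step blends a model forecast and an observation according to their error variances:

```python
# One scalar (Kalman-style) analysis step, for illustration only.
x_f, var_f = 10.0, 4.0   # model forecast and its error variance (toy values)
y_o, var_o = 12.0, 1.0   # observation and its error variance (toy values)

K = var_f / (var_f + var_o)   # Kalman gain: how much to trust the observation
x_a = x_f + K * (y_o - x_f)   # analysis state
var_a = (1.0 - K) * var_f     # analysis error variance, reduced by the update
print(x_a, var_a)             # -> 11.6 0.8: pulled toward the accurate obs
```

Full DA systems generalize this update to high-dimensional states, e.g. via ensemble or variational methods.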
Research software, encompassing source code files, algorithms, computational workflows, and executables, plays a pivotal role in various scientific disciplines. For example, computational models of the Earth may aid decision-making by quantifying the outcomes of different scenarios, such as varying emission pathways. How can we ensure the robustness and longevity of such research software? This short course teaches the concept of sustainable research software: software that is easy to maintain and extend, making it easier to incorporate new ideas and stay in sync with the most recent scientific findings. This maintainability should also be achievable by researchers who did not initially develop the code, which will ultimately result in more reproducible science.
In this short course, we will delve into sustainable research software development principles and practices. The topics include:
- Properties and metrics of sustainable research software
- Writing clear, modular, reusable code that adheres to coding standards and best practices of sustainable research software (e.g., agile project management, documentation, unit testing, FAIR for research software).
- Using simple code quality metrics to develop high-quality code
- Documenting your code using platforms like Sphinx for Python
We will apply these principles to a case study of a reprogrammed version of the global WaterGAP Hydrological Model. We will showcase its current state in a GitHub environment along with example source code. A minimal illustration of a documented, unit-tested function follows below.
This course is intended for early-career researchers who create and use research models and software. Basic programming or software development experience is required. The course has limited seats, available on a first-come, first-served basis.
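As a small, hedged illustration of the practices listed above (a hypothetical function, not WaterGAP code), a documented function plus a unit test is the smallest building block of sustainable research software:

```python
# runoff.py -- a tiny, documented, testable function (hypothetical example).
def runoff_coefficient(runoff_mm: float, precipitation_mm: float) -> float:
    """Return the runoff coefficient (runoff divided by precipitation).

    Sphinx can render this docstring into API documentation.

    :param runoff_mm: total runoff in millimetres, must be >= 0
    :param precipitation_mm: total precipitation in millimetres, must be > 0
    :raises ValueError: if precipitation_mm is not positive
    """
    if precipitation_mm <= 0:
        raise ValueError("precipitation must be positive")
    return runoff_mm / precipitation_mm


# test_runoff.py -- a pytest unit test that guards future changes.
def test_runoff_coefficient():
    assert runoff_coefficient(50.0, 100.0) == 0.5
```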
Datacubes form an acknowledged cornerstone for analysis-ready data – the multi-dimensional paradigm is natural for humans and easier to handle than zillions of scenes, for both humans and programs. Today, datacubes are common in many places – powerful management and analytics tools exist, with both datacube servers and clients ranging from simple mapping over virtual globes and Web GIS to high-end analytics through python and R. This ecosystem is backed by established standards in OGC, ISO, and further bodies.
In this short course we introduce the concepts of datacubes, discuss different service API standards, and explore hands-on together how to access, extract, analyze, and reformat data from datacubes. Particular emphasis is on timeseries wrangling. The examples can be recapitulated and modified by participants with online access. Further, building up datacubes from a variety of file sets is discussed, and ample room will be left for discussion, inviting participants to table their datacube-related problems. A minimal access sketch follows the learning outcomes below.
After this course, participants
- Know the core datacube concepts, terminology, and standards
- Understand the advantage of the harmonization done in datacubes and its contribution to analysis-ready data
- Know sample datacube services and can work with them, with some self-chosen client
- Understand how datacubes can be built from file sets
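As that minimal access sketch (the dataset URL and variable names below are placeholders, and the course may use different clients), a Python session against a cloud-hosted datacube could look like this:

```python
# Open a (hypothetical) cloud-hosted datacube and wrangle a timeseries.
import xarray as xr

# Placeholder URL -- substitute a real Zarr/NetCDF datacube endpoint.
ds = xr.open_zarr("https://example.org/cubes/sst.zarr")

# Datacube-style operations: slice by coordinates, not by file names.
point = ds["sst"].sel(lat=45.0, lon=5.0, method="nearest")
monthly = (point.sel(time=slice("2020-01-01", "2022-12-31"))
                .resample(time="1MS").mean())
monthly.to_dataframe().to_csv("sst_timeseries.csv")  # reformat for reuse
```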
Co-organized by ESSI6, co-sponsored by
IEEE GRSS and CODATA-Germany
In April 2023, EPOS, the European Plate Observing System, launched the EPOS Data Portal (https://www.ics-c.epos-eu.org/), which provides access to multidisciplinary data, data products, services and software from the solid Earth science domain. Currently, ten thematic communities provide input to the EPOS Data Portal through services (APIs): Anthropogenic Hazards, Geological Information and Modelling, Geomagnetic Observations, GNSS Data and Products, Multi-Scale Laboratories, Near Fault Observatories, Satellite Data, Seismology, Tsunami and Volcano Observations.
The EPOS Data Portal enables search and discovery of assets through metadata and their visualisation in map, table or graph views, including download of the assets, with the objective of enabling multi-, inter- and transdisciplinary research following FAIR principles.
This short course will provide an introduction to the EPOS ecosystem, a demonstration of the EPOS Data Portal and hands-on training following a scientific use case using the online portal. It is expected that participants have a scientific background in one or more of the scientific domains listed above.
The training especially targets young researchers and all those who need to combine multi-, inter- and transdisciplinary data in their research. The use of the EPOS Data Portal will simplify data search for Early Career Scientists and potentially help accelerate their career development.
Feedback from participants will be collected and used for further improvements of the Data Portal.
R is a free, open-source programming language popularly used for data science, statistical analysis, and visualization. Spatial data analysis has been strongly supported by the R community, which provides tools for reading, writing and downloading data, and for spatial processing, visualization and modelling. The R-Spatial package ecosystem relies on common libraries for geospatial analysis such as GDAL, GEOS, and PROJ. In this workshop, we will introduce participants to spatial data analysis in R, with demonstrations of key packages like {sf}, {stars} and {terra} for vector and raster data processing, and of spatial data visualization using the {tmap} package. We will work with datasets widely used by the geoscience community, including satellite imagery.
Public information:
Schedule:
19:00-19:10: Introduction to R-Spatial
19:10-19:30: Vector data
19:30-19:50: Raster data
19:50-20:00: Q&A
Julia offers a fresh approach to scientific computing, high-performance computing and data crunching. Recently designed from the ground up, Julia avoids many of the weak points of older, widely used programming languages in science such as Python, Matlab, and R. Julia is an interactive scripting language, yet it executes with speed similar to that of C(++) and Fortran. These qualities make it an appealing tool for the geoscientist.
Julia has been gaining traction in the geosciences over the last years in applications ranging from high performance simulations, data processing, geostatistics, machine learning, differentiable programming to general modelling. The Julia package ecosystem necessary for geosciences has substantially matured, which makes it readily usable for research.
This course provides a hands-on introduction to get you started with Julia. We aim to give a broad overview of Julia and its ecosystem as well as going through hands-on coding exercises based around concrete earth science applications. In particular you will:
- learn about the Julia language and what sets it apart from others
- write simple Julia code to get you started with scientific programming (arrays, loops, input/output, etc.)
- do a hands-on exercise on installing Julia packages and managing package environments (similar, e.g., to virtual environments in Python)
- get a brief overview of geoscience-related packages
- code a small project, such as a simple 1D model or a data processing pipeline, with a particular focus on achieving performance on par with C or Fortran.
We request participants to install Julia on their laptops ahead of time to allow a smooth start into the course; we will provide detailed documentation for this installation. We look forward to having you on board and aim for this workshop to be a fresh and interactive outlook on modern scientific computing. We will make sure to foster the exchange of ideas and knowledge and to make the event as inclusive as possible.
The Python community is steadily growing in the field of Earth and Space Sciences, as many Python tools have evolved into more efficient and user-friendly options for handling geospatial data. In this short introductory course, we will help participants with a working knowledge of Python familiarize themselves with the world of geospatial raster and vector data. We will introduce a set of tools from the Python ecosystem and show how these can be used to carry out practical geospatial data analysis tasks. We will use satellite images and public geo-datasets as examples, and demonstrate how they can be opened, explored, manipulated, combined, and visualized using Python. The tutorial will be based on the lesson “Introduction to Geospatial Raster and Vector data with Python” [1], which is part of the Incubator program [2] of The Carpentries [3].
We encourage participants to join with a laptop and code along with the instructors. Researchers and staff interested in teaching the lesson curriculum [1] at their own institutions are also very welcome to join the demo.
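As a minimal taster of the workflow taught in the lesson (file names are placeholders, and the lesson's exact toolset may differ), raster and vector data can be read and combined with commonly used packages:

```python
# Open, inspect, and combine raster and vector data (placeholder paths).
import rioxarray
import geopandas as gpd

raster = rioxarray.open_rasterio("satellite_scene.tif")  # raster as xarray
fields = gpd.read_file("field_boundaries.gpkg")          # vector GeoDataFrame

# Reproject the vector layer to the raster's CRS, then clip the raster.
fields = fields.to_crs(raster.rio.crs)
clipped = raster.rio.clip(fields.geometry)
print(clipped.shape)
```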
Inferring climatic and tectonic processes from digital elevation models (DEMs) largely relies on the assumption that landscapes are in a steady state. However, transient landscapes, i.e., landscapes that undergo an adjustment to changing boundary conditions, contain information on geomorphic processes that - in combination with geomorphic transport laws - can be interrogated with digital terrain analysis and (spatial) statistics. This short course enables participants to learn methods and techniques implemented in TopoToolbox (http://topotoolbox.wordpress.com) to objectively infer rates of fluvial incision, knickpoint migration and divide migration in landscapes that experienced perturbations such as sudden changes in baselevel (e.g. by fault movements or sea level changes). Thereby, emphasis will be placed on parameter estimation, uncertainty propagation and quantification. Basic programming skills in MATLAB and experience with TopoToolbox are advantageous, but not strictly required.
Geologists collect data from samples with many different techniques, such as optical and electron microscopy, and in many different forms, such as images and spot analyses. This makes the storage, processing and sharing of these datasets in a unified way extremely challenging. In this short course, we want to demonstrate, with examples from structural geology, petrology and paleontology, how a GIS (specifically QGIS) can be used to organize and analyze your data. This will include a very basic introduction to the different data formats and then focus on real-world examples, to give a glimpse into the vast capabilities of QGIS on the microscale. We will also have plenty of time for the discussion of other software solutions and for advice on your own datasets.
Public information:
16:15 Why we need to organize our microscale data and how QGIS comes into play.
16:25 Showcase 1: An example from petrology
16:35 Showcase 2: An example from palaeontology
16:45 Questions and discussion
16:55 Short break
17:00 First steps in managing your microscale data in QGIS (tutorial)
In the past year, two rapid simulation tools for natural hazards were developed. Fastflood.org features a rapid simulation method for rainfall-runoff, routing and hydraulic modelling, running on average over 1500x faster than full simulation while achieving over 97 percent accuracy in simulated flooded areas. Fastrocks.org, a new addition to be published early 2024, provides a soil depth, slope stability and mass movement simulation tool with over 500 times the speed of full debris-flow models. Both of these tools are available as open, free, web-based simulation platforms, and are linked with global and satellite-based datasets to enable rapid assessment and interactive scenario exploration. In this session we will organize a hands-on workshop with these tools. Using the automated data input tools, you can start exploring the workings of the model and the behaviour of the hazards in your own area. Best practices for improving your initial model using custom data or the built-in automated calibration tools will be explained. The limitations and opportunities of these simulation platforms will be explored through several study examples that can be simulated interactively during the workshop. In addition, the underlying technologies will be presented: both the numerical algorithms used to speed up the simulations and the web technologies used to host the platforms. Due to the usage of WebAssembly, simulations run locally, and all user computations and data remain fully on the user's device. Finally, the latest validation research will be highlighted.
Public information:
To everybody joining this session: we will be working with both www.fastflood.org, a rapid flood simulation tool, and www.fastslide.org, a rapid landslide modelling tool.
NOTE: The name has changed; the landslide tool is available at www.fastslide.org.
Database documentation and sharing is a crucial part of the scientific process, and more scientists are choosing to share their data on centralised data repositories. These repositories have the advantage of guaranteeing immutability (i.e., the data cannot change), which is not so amenable to developing living databases (e.g., in continuous citizen science initiatives). At the same time, citizen science initiatives are becoming more and more popular in various fields of science, from natural hazards to hydrology, ecology and agronomy.
In this context, distributed databases offer an innovative approach to both data sharing and evolution. These systems have the distinct advantage of becoming more resilient and available as more users access the same data, and since distributed systems, contrary to decentralised ones, do not use blockchain technology, they are orders of magnitude more efficient in data storage as well as completely free to use. Distributed databases can also mirror existing data, so that scientists can keep working in their preferred Excel, OpenOffice, or other software while automatically syncing database changes to the distributed web in real time.
This workshop will present the general concepts behind distributed, peer-to-peer systems. Attendees will then be guided through an interactive activity on Constellation, a new scientific software for distributed databases, learning how to create their own databases as well as how to access and use others' data from the network. Potential applications include citizen science projects for hydrological data collection, invasive species monitoring, or community participation in managing natural hazards such as floods.
The workshop is organised according to the following schedule:
* Introduction to distributed databases and peer-to-peer systems (Julien Malard-Adam)
* Experiences in data management challenges in large participatory science projects in the Andes (Wouter Buytaert)
* Hands-on participatory tutorial with distributed data and Constellation software (Julien Malard-Adam; Joel Harms)
Visualisation of scientific data is an integral part of scientific understanding and communication. Scientists have to make decisions about the most effective way to communicate their results every day. How do we best visualise the data to understand it ourselves? How do we best visualise our results to communicate with others? Common pitfalls include overcrowded or overcomplicated plots, suboptimal plot types, and inaccessible colour schemes. Scientists may also feel overwhelmed by the graphics requirements of different publishers, for presentations, posters, etc. This short course is designed to help scientists improve their data visualisation skills so that their research outputs become more accessible within their own scientific community and reach a wider audience.
Topics discussed include:
- Golden rules of DataViz
- Choosing the most appropriate plot type and designing a good DataViz
- Graphical elements, fonts & layout
- Colour schemes, accessibility & inclusiveness – which ones to use or not to use
- Creativity vs simplicity – finding the right balance
- Figures for scientific journals: graphical requirements, rights & permissions
- Tools for effective data visualisation: DataViz with R and ggplot2
This course is co-organized by the Young Hydrologic Society (YHS), enabling networking and skill enhancement of early-career researchers worldwide. Our goal is to help you make your figures more accessible to a wider audience, more informative, and more beautiful. If you feel your graphs could be improved, we welcome you to join this short course.
Satellite imagery acquired by the Sentinel satellites can now be openly accessed via a new interface launched for easy searching, navigation, visualization and download of these datasets. The Copernicus Browser provides the tools to quickly visualise satellite imagery, whether as individual acquisitions, comparisons of different dates or even generated timelapses. This course will explain the basics of satellite Earth observation and introduce the available sensors and satellites and their various applications. In addition to the default visualization options, custom scripts will be introduced for calculating spectral indices and derived products (a simple example follows below). Advanced image effects, tools and download options will be shown, together with tools for sharing imagery online without downloading. The Copernicus Browser interface for downloading individual images will also be demonstrated, with powerful search and filtering options for finding images of interest.
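As one simple example of such a spectral index (a standard formula, given here for orientation rather than as Browser-specific code), the normalized difference vegetation index is

$$\mathrm{NDVI} = \frac{NIR - Red}{NIR + Red}$$

which for Sentinel-2 corresponds to bands B08 (NIR) and B04 (Red); values close to 1 indicate dense, healthy vegetation, while bare soil and water yield values near or below zero.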
Additionally, much of the functionality of the Copernicus Browser is also available in a QGIS plugin. In addition to accessing imagery from the Copernicus Data Space Ecosystem, users can also create custom configurations and layers. This plugin will also be demonstrated with practical case studies.
This short course will introduce the functionality of the Copernicus Browser and the Copernicus Data Space Ecosystem QGIS plugin, starting from beginner level and progressing towards the more advanced tools. Participants can follow on their own computers, but the course will be designed also for those without on-site computer access. After the course, participants will be able to search and discover satellite imagery of sites and events of interest, identify algorithms for studying various properties of the imagery, visualize the results, and download or share the resulting products. No prior knowledge of remote sensing or image processing is required.