Most geoscientific research is tied to a specific geographic location, and therefore we almost always need to create a study area map. This short course will be a practical hands-on session on making a beautifully stylized study area map from scratch using the open-source software QGIS. We will show where to download global open data and how to create a map with all the necessary map elements (title, legend, scale bar, north arrow) and an inset map showing the location of the main map in the context of a larger area. In addition, we will present state-of-the-art "do's and don'ts" of cartographic design based on cases from published research papers.
Public information:
To actively participate in the session, you will need to bring your laptop with QGIS installed; it can be downloaded free of charge from https://qgis.org/en/site/forusers/download.html
The recommended version is the QGIS Standalone Installer Version 3.22. Version 3.24 is not recommended because it is not stable.
If you wish, you can bring your own study area boundary as an shp or gpkg file.
In many scientific disciplines, accurate, intuitive, and aesthetically pleasing display of geospatial information is a critical tool. PyGMT (https://www.pygmt.org) - a Python interface to the Generic Mapping Tools (GMT) - is a mapping toolbox designed to produce publication-quality figures and maps for insertion into posters, reports, and manuscripts. This short course is geared towards geoscientists interested in creating beautiful maps using Python. Only basic Python knowledge is needed, and a background in cartography is not required to use PyGMT effectively! By the end of this tutorial, students will be able to:
- Craft basic maps with geographic map frames using different projections
- Add context to their figures, such as legends, colorbars, and inset overview maps
- Use PyGMT to process PyData data structures (xarray/pandas/geopandas) and plot them on maps
- Understand how PyGMT can be used for various applications in the Earth sciences and beyond!
The 1.5-hour short course will be based on content adapted from https://github.com/GenericMappingTools/2021-unavco-course and https://github.com/GenericMappingTools/foss4g2019oceania. Each of the 30-minute sessions will involve a quick (~10-minute) walkthrough by the speaker, followed by a more hands-on session in breakout rooms, where tutorial participants work on the topic (using interactive Jupyter notebooks) in a guided environment with one of four instructors on hand to answer questions.
We expressly welcome students and geoscientists working in any geo-related field (e.g. Earth observation, geophysics, marine, magnetic, gravity, or planetary science) to join. Come and find out what PyGMT can do to level up your geoprocessing workflow!
Public information:
Course materials are available as a Jupyter Book at https://www.generic-mapping-tools.org/egu22pygmt. The GitHub repository is at https://github.com/GenericMappingTools/egu22pygmt
Geotagged photographs provide unique insights into a plurality of geographic tasks, from land use/land cover classification to monitoring biodiversity, hazardous events, and people's well-being. Millions of passively and actively crowdsourced geotagged photographs are openly available online via social media platforms (Twitter, Flickr, VK.com) and citizen science initiatives. However, handling big photo datasets requires advanced data science skills. This short course presents democratised digital tools for image analytics and classification: a TensorFlow deep learning classification model implemented in Microsoft's Lobe software, and the image analytics tools in the Orange data mining software. Participants will learn the basics of image analytics, the difference between supervised and unsupervised classification, and quality assessment and optimisation techniques for deep learning image classification models.
Public information:
In this session, we will solve a classification task for a dataset of geotagged Flickr photographs using hierarchical clustering and a custom deep learning model. No coding skills are required. Participants are expected to install Orange (https://orangedatamining.com/download/#windows) and Lobe (https://www.lobe.ai/) on their own laptops in advance. In Orange, go to Options - Add-ons... and install the 'Image Analytics' add-on before the workshop.
In recent years, it has become more and more obvious that soil structure plays a fundamental role in regulating processes in soils. As soil structures are hierarchical, complex, and highly variable, studies involving soil structure require a relatively large number of replicate samples. Three-dimensional X-ray imaging provides an excellent tool for mapping out soil structure, but image analyses are still time intensive and require experience. This limits the number of X-ray images, and thus replicate samples, that can be analyzed within reasonable time scales. SoilJ is a free, open-source plugin for the open-source image processing software ImageJ. It is tailor-made for the analysis of X-ray images of soil and aims at automating the necessary image processing and analysis steps. This course gives a short introduction to X-ray image processing and analysis in general and with SoilJ specifically, provides an overview of SoilJ functionalities, and offers guidance for researchers interested in developing their own plugins.
We have developed an open-source software package in Python for ground-based GNSS reflections: gnssrefl (https://github.com/kristinemlarson/gnssrefl). This new software supports geoscientists wishing to measure in situ snow accumulation, permafrost melt, firn density, tides, and lake/river levels. We have developed videos (hosted on YouTube) to help new users understand both the basic concepts of GNSS reflections and how to install and run the gnssrefl code. More than a dozen use cases are available online, and Jupyter notebooks have been developed as well. We envision the EGU tutorial session to be hands-on and interactive, with a focus on demonstrating the gnssrefl software and online tools (https://gnss-reflections.org), examining and discussing environmental results derived from GNSS data taken from public archives, and analyzing new datasets suggested by the students.
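The physical idea behind GNSS reflectometry can be sketched in a few lines of NumPy. This is a synthetic toy illustration of the principle, not the gnssrefl code itself: the SNR interference pattern oscillates in sin(elevation) with a frequency proportional to the reflector height H (the antenna height above the reflecting surface), so a spectral peak recovers H.

```python
import numpy as np

wavelength = 0.1903  # GPS L1 carrier wavelength in metres
H_true = 2.0         # reflector height in metres (e.g. antenna above snow)

e = np.linspace(5, 25, 2000)   # satellite elevation angles (degrees)
x = np.sin(np.radians(e))      # GNSS-IR works in sin(elevation) space
# Idealised detrended SNR: oscillation frequency is 2*H/wavelength
snr = np.cos(4 * np.pi * H_true / wavelength * x)

# Resample onto a uniform sin(e) grid so a plain FFT applies,
# then read the reflector height off the dominant spectral frequency.
xu = np.linspace(x.min(), x.max(), 4096)
su = np.interp(xu, x, snr)
spec = np.abs(np.fft.rfft(su))
freqs = np.fft.rfftfreq(xu.size, d=xu[1] - xu[0])
f_peak = freqs[np.argmax(spec[1:]) + 1]   # skip the zero-frequency bin
H_est = f_peak * wavelength / 2           # recovered reflector height (m)
```

In practice gnssrefl applies a Lomb-Scargle periodogram to real, unevenly sampled SNR data and handles the many quality-control steps this sketch omits.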
Public information:
We have developed an open-source code in Python (gnssrefl) that allows users to measure either water levels or snow accumulation using GNSS data. This session will be devoted to helping users understand how to install and run the code. Please see the GitHub repository (https://github.com/kristinemlarson/gnssrefl) for tips on how to install the gnssrefl package on your local machine. We currently support the Python code on Linux and macOS, with Docker images for these platforms and for Windows PCs. We also have links to Jupyter notebooks. There is a complementary web app at https://gnss-reflections.org.
Database documentation and sharing are a crucial part of the scientific process, and more and more scientists are choosing to share their data in centralised data repositories. These repositories have the advantage of guaranteeing immutability (i.e., the data cannot change), which is advantageous for preserving published data "as-is" but is not so amenable to developing living databases (e.g., in continuous citizen science initiatives).
Distributed databases offer an innovative approach to both data sharing and evolution. Since these databases exist entirely on peer-to-peer systems, the distinction between "server" and "client" is blurred and the data residing on each individual device that is accessing it (whether personal computer, mobile phone or server) are equally valid sources of truth that can share data with new peers. These systems therefore have the distinct advantage of becoming more resilient and available as more users access the same data, with significant potential to decrease server costs and entry barriers for citizen science initiatives. At the same time, behind-the-scenes cryptography automatically ensures that the data is valid and cannot be tampered with by intermediary peers.
Distributed databases can also be configured to mirror existing databases in other formats, so that scientists can keep working in their preferred Excel, OpenOffice, SQL or other software while automatically syncing database changes to the decentralised web in real time.
This workshop will present the general concepts behind distributed, peer-to-peer systems with a particular emphasis on the context of scientific data sharing. Attendees will then be guided through an interactive activity on Constellation, a new scientific software for distributed databases, learning how to both create their own databases as well as access and use others' data from the network.
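The tamper-evidence that such peer-to-peer systems provide can be illustrated with a toy content-addressing sketch in Python. This is illustrative only, not Constellation's actual implementation: each record links to the hash of the previous one, so any edit to an earlier record changes every later hash and is detected by peers.

```python
import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the hash of its predecessor."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

# Build a small chain of citizen-science observations.
log, prev = [], ""
for obs in [{"species": "alder", "n": 3}, {"species": "birch", "n": 5}]:
    h = record_hash(obs, prev)
    log.append((obs, h))
    prev = h

def verify(log) -> bool:
    """Recompute every link; any tampering breaks the chain."""
    prev = ""
    for obs, h in log:
        if record_hash(obs, prev) != h:
            return False
        prev = h
    return True

print(verify(log))   # True: the untampered chain verifies
log[0][0]["n"] = 99  # a peer silently edits an early record...
print(verify(log))   # False: the chain no longer verifies
```

Real systems add public-key signatures and content-addressed replication on top of this basic idea.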
MTEX (https://mtex-toolbox.github.io/) has become a standard tool for the quantification of crystallographic textures and microstructural-derived physical properties in geological materials. From the quantification of crystallographic preferred orientations (CPOs), intracrystalline deformation, grain sizes and shapes in geological materials, to determination of CPO-derived physical properties (e.g., elastic, piezoelectric), all is possible with MTEX.
In the first part of the short course, we will introduce basic concepts of how MTEX works. In the second part, we will run demonstrations of some applications to geological materials.
The following topics will be covered:
1) Importing EBSD data, pre-processing;
2) Checking orientations, ODF quantification, plot of pole figures;
3) Grain segmentation, calculation of grain sizes and shapes;
4) Intracrystalline deformation analysis, subgrains, new grains;
5) Tensors and CPO-derived seismic properties
Demonstrations will be made using the MTEX toolbox in MATLAB. Note, however, that familiarity with the toolbox is not required. This is a short course, not a workshop.
Please email us if you want to participate (ruediger.kilian@geo.uni-halle.de or luiz.morales@scopem.ethz.ch)
The main objective of this workshop is to introduce attendees to practical use cases for Discrete Global Grid Systems (DGGS) for spatial data aggregation and analysis. While still in gridded form, DGGS have several topological advantages over classic rasters, e.g. equal-area properties and reliable unique cell indexing. After a short background on current real-world software implementations with exemplary use cases, we walk through an interactive exploration of solving traditional GIS and spatial analysis challenges with a hexagonal DGGS, using Uber H3 as the example. H3 is a software library that, besides many other programming languages, can be used in Python. We demonstrate grid generation, data indexing and sampling in a unified Jupyter notebook. We apply spatial analysis methods that exploit the specific grid properties and finally discuss DGGS for datacube applications.
Public information:
Spatial Data Analysis with H3: please go to the following GitHub repository for the course materials: https://github.com/allixender/dggs_t1
A prepared MyBinder online notebook environment is linked there; we will work through its notebook sections.
This short course is an opportunity to learn about Copernicus data for atmospheric composition and to see examples of how to develop your own workflows for atmospheric composition applications related to air quality or wildfires. Data from Copernicus, the European Commission's Earth Observation programme, include satellite- and model-based products and provide vital information on key atmospheric constituents at different spatial and temporal scales.
The session will be hands-on and leading experts in Earth Observation, Earth System Modelling and atmospheric composition will introduce you to the data and show you how you can discover, process and visualise them. You will make use of a series of freely available tools specifically developed for atmospheric composition applications. The course will be based on educational Jupyter Notebook modules, which allow for an easy and intuitive way to learn Python and Earth System data processing. No experience is necessary as various exercises will be provided for a wide range of skill levels and applications. We recommend bringing your laptop along.
In recent years, several new open-source online and desktop applications for visualizing and analyzing paleomagnetic and paleointensity data have been presented (Remasoft, paleomagnetism.org, paleointensity.org, PuffinPlot, DAIE, etc.). Among them, the PmagPy package includes a set of tools, written in Python, to visualize data and conduct statistical tests with associated visualizations. PmagPy is accompanied by a large and growing set of Jupyter notebooks, which can be used on an online JupyterHub website, avoiding software installation. There is also an openly available course on Python for Earth scientists, illustrating the use of various useful packages and the creation of data product maps and figures. The PmagPy notebooks include a guide through the process of preparing data in the laboratory and the final upload to the MagIC database. We invite you to this short course, which is aimed at everyone from first-time users to experienced paleomagnetists.
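As a flavour of the statistics such tools compute, here is a minimal NumPy sketch of a Fisher (1953) mean direction, the workhorse of paleomagnetic data analysis. PmagPy's own routines are far more complete; the sample directions below are invented for illustration.

```python
import numpy as np

def fisher_mean(dec_deg, inc_deg):
    """Fisher mean of paleomagnetic directions (declination/inclination)."""
    dec, inc = np.radians(dec_deg), np.radians(inc_deg)
    # Convert directions to unit vectors (x north, y east, z down).
    x = np.cos(inc) * np.cos(dec)
    y = np.cos(inc) * np.sin(dec)
    z = np.sin(inc)
    rx, ry, rz = x.sum(), y.sum(), z.sum()
    R = np.sqrt(rx**2 + ry**2 + rz**2)   # resultant vector length
    n = len(dec)
    mean_dec = np.degrees(np.arctan2(ry, rx)) % 360
    mean_inc = np.degrees(np.arcsin(rz / R))
    kappa = (n - 1) / (n - R)            # Fisher precision parameter
    return mean_dec, mean_inc, kappa

# Five well-clustered example directions (invented data).
decs = np.array([352.0, 1.5, 8.0, 357.0, 3.0])
incs = np.array([61.0, 58.5, 63.0, 60.0, 59.0])
d, i, k = fisher_mean(decs, incs)
```

Tightly clustered directions yield a large precision parameter kappa; scattered directions drive it toward 1.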
R is an open-source, versatile programming language that is suitable for multi-scale analyses from just a few observations to big data and high-performance computing. It has a growing, enthusiastic user-base (including hydrologists) that is responsible for a continuous stream of ever more efficient and useful packages and workflows.
Running for its fifth year, this EGU short course, co-organised by the Young Hydrologic Society (younghs.com), will introduce and showcase a selection of both core and recently developed R packages that can be applied to data analyses in hydrology, as well as other scientific disciplines.
The course will be delivered by hydrologists with wide experience in subjects including: hydrological modelling (including flood and drought analysis), forecasting, statistics, and eco-hydrology.
Topics covered in this year's course include:
• Topic tbd (Claudia Brauer)
• Identification of hydrologic events (Conrad Wasko + Danlu Guo)
• Flood forecast verification in R (Andrea Ficchi)
• The (mis)use of colours in scientific visualizations (Michael Stoelzle)
• Machine learning for spatio-temporal modelling (Razi Sheikholeslami)
This course contributes new topics to those delivered in previous years, building upon the openly accessible Github repository for hydrologists using R in their work (https://github.com/hydrosoc).
Public information:
Detailed programme:
8.30-8.50 Identification of hydrologic events (Conrad Wasko + Danlu Guo)
8.50-9.10 Flood forecast verification in R (Andrea Ficchi)
9.10-9.30 Machine learning for spatio-temporal modelling (Razi Sheikholeslami)
9.30-9.50 The (mis)use of colours in scientific visualizations (Michael Stoelzle)
This short course will prepare engineers, water resource professionals, and scientists to use the HEC-RAS computer program in real-world situations. HEC-RAS is user friendly, computationally efficient, and runs within, and fully supports, the Microsoft Windows environment. It uses a graphical user interface (GUI) for data entry, graphics, and display of program results, with context-sensitive help screens available for every program feature and option. The software includes the following functions: file management, data entry and editing, hydraulic analyses, tabulation and graphical display of input and output data, reporting facilities, and online help. Participants will learn how to compute water surface elevations for different river discharges under steady and unsteady flow conditions. Geoscientists who know a river's water level can use the model to estimate the corresponding discharge, even for levels recorded thousands of years ago. The unsteady flow analysis will help river engineers and participants from any discipline understand how flood waters may have passed a specific section. A calibrated and validated model can then be run to predict water surface elevations for different flow scenarios from a river basin. The course applies to everything from a small stream passing through a campus to large rivers with very high discharges.
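HEC-RAS solves the full 1-D energy and momentum equations; as a back-of-the-envelope companion to the steady-flow idea, this sketch finds the normal depth in a rectangular channel from Manning's equation by bisection (SI units; the channel parameters are invented for illustration, and this is not how HEC-RAS itself computes profiles).

```python
import numpy as np

def manning_discharge(depth, width, slope, n):
    """Discharge (m^3/s) in a rectangular channel via Manning's equation."""
    area = width * depth            # flow area (m^2)
    perimeter = width + 2 * depth   # wetted perimeter (m)
    radius = area / perimeter       # hydraulic radius (m)
    return area * radius ** (2 / 3) * np.sqrt(slope) / n

def normal_depth(q_target, width, slope, n, lo=1e-4, hi=50.0):
    """Find the depth whose Manning discharge matches q_target (bisection)."""
    for _ in range(80):             # discharge is monotonic in depth
        mid = 0.5 * (lo + hi)
        if manning_discharge(mid, width, slope, n) < q_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example: 100 m^3/s in a 20 m wide channel, bed slope 0.001, n = 0.03
depth = normal_depth(100.0, 20.0, 0.001, 0.03)
```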
This Short Course is aimed at researchers in climate-related domains, who have an interest in working with climate data. We will introduce the ESMValTool, a Python project developed to facilitate the analysis of climate data through so-called recipes. An ESMValTool recipe specifies which input data will be used, which preprocessor functions will be applied, and which analytics should be computed. As such, it enables readable and reproducible workflows. The tool takes care of finding, downloading, and preparing data for analysis. It includes a suite of preprocessing functions for commonly used operations on the input data, such as regridding or computation of various statistics, as well as a large collection of established analytics.
In this course, we will run some of the available example recipes using ESMValTool’s convenient Jupyter notebook interface. You will learn how to customize the examples, in order to get started with implementing your own analysis. A number of core developers of ESMValTool will be present to answer any and all questions you may have.
The ESMValTool has been designed to analyze the data produced by Earth System Models participating in the Coupled Model Intercomparison Project (CMIP), but it also supports commonly used observational and re-analysis climate datasets, such as ERA5. Version 2 of the ESMValTool has been specifically developed to target the increased data volume and complexity of CMIP Phase 6 (CMIP6) datasets. ESMValTool comes with a large number of well-established analytics, such as those in Chapter 9 of the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5) (Flato et al., 2013) and has been extensively used in preparing the figures of the Sixth Assessment Report (AR6). In this way, the evaluation of model results can be made more efficient, thereby enabling scientists to focus on developing more innovative methods of analysis rather than constantly having to "reinvent the wheel".
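For orientation, an ESMValTool recipe has roughly the following shape. This is a sketch only: the dataset, variable, and script names below are placeholders, not a runnable recipe from the tool's gallery.

```yaml
# Sketch of an ESMValTool recipe's overall structure (illustrative only).
documentation:
  title: Example global mean temperature analysis
  description: Area-weighted global mean of near-surface air temperature.

datasets:
  - {dataset: EXAMPLE-MODEL, project: CMIP6, exp: historical, ensemble: r1i1p1f1}

preprocessors:
  global_mean:
    area_statistics:
      operator: mean

diagnostics:
  example_diagnostic:
    variables:
      tas:
        mip: Amon
        preprocessor: global_mean
    scripts:
      my_script:
        script: examples/diagnostic.py
```

The recipe names the input data, the preprocessor chain, and the diagnostic script, which is what makes the workflow readable and reproducible.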
Public information:
Course material will be made available at https://github.com/ESMValGroup/EGU22-short-course
Soil moisture is a key variable for applications in climatology and hydrology. Knowledge about soil moisture is important for understanding ecosystem feedbacks to global climate change. Remote sensing can assist with deriving spatial soil moisture data on a regular basis. In particular, optical remote sensing can be used to estimate soil moisture globally at high spatial resolution (30 m), drawing on unprecedented satellite archives (>30 years of Landsat).
The Optical Trapezoid Model (OPTRAM) has shown high accuracy in soil moisture estimation over mineral and organic soils. OPTRAM utilises the NIR and SWIR spectral regions, which are sensitive to the water content of soil and vegetation. In wetlands, OPTRAM can also be used to derive information about the groundwater position. Deriving soil moisture information with OPTRAM is a complex task that requires skills in processing remote sensing images, coding, and analysing spatiotemporal data.
In this workshop, we will present the workflow needed for OPTRAM calculation in open-source R software. We will guide the participants on several key points:
- sources to derive optical remote sensing data;
- spatial data wrangling;
- estimation of OPTRAM;
- plotting of spatial data;
- interpretation of results.
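The OPTRAM calculation itself is compact. The workshop uses R, but the formula can be sketched in Python as follows; the dry/wet edge parameters below are placeholders that would normally be fitted to the NDVI-STR point cloud of the scene.

```python
import numpy as np

def optram_moisture(ndvi, swir, i_d=0.0, s_d=1.0, i_w=4.0, s_w=2.0):
    """Normalised soil moisture from the OPTRAM trapezoid (Sadeghi et al.)."""
    str_ = (1 - swir) ** 2 / (2 * swir)         # SWIR transformed reflectance
    str_dry = i_d + s_d * ndvi                  # dry edge of the trapezoid
    str_wet = i_w + s_w * ndvi                  # wet edge of the trapezoid
    w = (str_ - str_dry) / (str_wet - str_dry)  # position between the edges
    return np.clip(w, 0.0, 1.0)                 # bound to [0, 1]

# Toy pixels: lower SWIR reflectance indicates wetter conditions.
ndvi = np.array([0.2, 0.5, 0.8])
swir = np.array([0.30, 0.20, 0.10])
w = optram_moisture(ndvi, swir)
```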
Public information:
We have uploaded the code and data to GitHub (you can find them at this link: https://github.com/PEATSPEC/EGU22_WTD_workshop). For the course, there is no need to install R or download data; you will run all the code online via Binder. If you want to try Binder in advance, go to the GitHub link above and scroll to the bottom of the page, where you will see the "launch binder" button; click it. The first launch of the Binder will take up to 10 minutes (subsequent launches will be faster). Please bring your laptops so we can run the code together.
QuakeMigrate is a new, open-source software package for automatic earthquake detection and location (https://github.com/QuakeMigrate/QuakeMigrate). Our software provides a means for seismologists to extract highly complete catalogues of microseismicity from continuous seismic data, whether their network is installed at a volcano, plate-boundary fault zone, on an ice shelf, or even on another planet. Rather than traditional pick-based techniques, it uses a migration-based approach to combine the recordings from stations across a seismic network, promising increased robustness to noise, more accurate hypocentre locations, and improved detection capability. Cloud-hosted Jupyter Notebooks and tutorials (https://mybinder.org/v2/gh/QuakeMigrate/QuakeMigrate/master) provide an overview of the philosophy and capabilities of our algorithm, and in this session we intend to provide a more hands-on introduction, with a focus on providing a general understanding of the considerations when applying a waveform-based algorithm to detect and locate seismicity.
QuakeMigrate has been constructed with a modular architecture, to make it flexible to use in different settings. We will demonstrate its use in detecting and locating basal icequakes at the Rutford Ice Stream, Antarctica, volcano-tectonic seismicity during the 2014 Bárðarbunga-Holuhraun and 2021 Reykjanes/Fagradalsfjall dike intrusions, and aftershocks from a M5 tectonic earthquake in northern Borneo, which was recorded on a sparse regional seismic network. In each case we will discuss the reasoning behind parameter selections, and the key factors in maximising detection sensitivity while minimising computational cost. We will end the session by exploring sample datasets provided by attendees, with interactive involvement as we tune parameters and use the comprehensive array of automatically generated plots to take a preliminary look at unseen data.
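The migration principle can be illustrated with a synthetic toy example (not QuakeMigrate's API): onset functions from several stations are shifted by predicted travel times and stacked, and the trial origin time with the highest stack value marks the detection. All numbers below are invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, dt = 600, 0.01                      # 6 s of data at 100 Hz
true_origin = 2.0                              # event origin time (s)
travel_times = np.array([0.8, 1.3, 1.9, 2.4])  # predicted times per station

# Synthetic onset functions: noise plus a peak at origin + travel time.
onsets = rng.normal(0, 0.1, (4, n_samples))
t = np.arange(n_samples) * dt
for i, tt in enumerate(travel_times):
    onsets[i] += np.exp(-((t - (true_origin + tt)) ** 2) / (2 * 0.05 ** 2))

# Migrate: for each trial origin time, sum the onset values at the
# predicted arrival times across the network.
trial_origins = np.arange(0, 4, dt)
stack = np.zeros_like(trial_origins)
for j, t0 in enumerate(trial_origins):
    idx = np.round((t0 + travel_times) / dt).astype(int)
    valid = idx < n_samples
    stack[j] = onsets[np.arange(4)[valid], idx[valid]].sum()

best = trial_origins[np.argmax(stack)]  # peaks where the peaks align (~2.0 s)
```

Because the stack only peaks when all stations align, the approach is robust to noise on any single trace, which is the advantage over pick-based methods noted above.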
Changes in temperature in landslide bodies can be the result of external forcing (climatic or geothermal) as well as the consequence of frictional heat dissipation. Understanding and quantifying the mechanical response of geomaterials under thermal forcing can be crucial for predicting the initiation and fate of landslides, and the associated risk. Depending on the scale of interest, different modelling strategies have been developed, spanning from physically-based fully-coupled models accounting for micro-scale behaviours to large-scale geostatistical approaches. This short course aims to offer an overview of these modelling strategies with particular attention to state-of-the-art advances. The session is organized in cooperation with NhET (Natural hazard Early career scientists Team).
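As a minimal illustration of the physically-based end of this modelling spectrum, here is a 1-D heat-conduction sketch showing how a surface temperature anomaly diffuses into a slope (explicit finite differences, homogeneous material; all parameter values are generic assumptions, not tied to any specific landslide code).

```python
import numpy as np

kappa = 1e-6            # thermal diffusivity (m^2/s), typical for soil
dz, dt = 0.05, 600.0    # grid spacing (m) and time step (s)
assert kappa * dt / dz**2 < 0.5   # explicit-scheme stability condition

depth = np.arange(0, 5 + dz, dz)  # 5 m deep temperature profile
T = np.full(depth.size, 10.0)     # initial uniform temperature (deg C)
T_surface = 15.0                  # step change at the surface (climatic forcing)

# March the diffusion equation dT/dt = kappa * d2T/dz2 forward for 30 days.
for _ in range(int(30 * 86400 / dt)):
    T[0] = T_surface
    T[1:-1] += kappa * dt / dz**2 * (T[2:] - 2 * T[1:-1] + T[:-2])

# After a month, the warming has penetrated only the uppermost few metres.
```

Even this simplest case shows why thermal forcing acts on shallow layers first, motivating the coupled thermo-mechanical models discussed in the course.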
Public information:
We will give an overview of selected methods to account for temperature in landslide modelling focusing on: