Good scientific practice requires research results to be reproducible, experiments to be repeatable and methods to be reusable. This is a particular challenge for hydrological research, as scientific insights are often drawn from analysis of heterogeneous data sets comprising many different sources and based on a large variety of numerical models. The available data sets are becoming more complex and constantly superseded by new, improved releases. Similarly, new models and computational tools keep emerging and many are available in different versions and programming languages, with a large variability in the quality of the documentation. Moreover, how data and models are linked together towards scientific output is very rarely documented in a reproducible way. As a result, very few published results in hydrology are reproducible for the general reader.
A debate on good scientific practice is underway, while technological developments accelerate progress towards open and reproducible science. This session aims to advance this debate on open science, collect innovative ways of engaging in open science and showcase examples. It will include new scientific insights enabled by open science and new (combinations of) open science approaches with a documented potential to make hydrological research more open, accessible, reproducible and reusable.

This session should advance the discussion on open and reproducible science, highlight its advantages and provide the means to bring it into practice. We strongly believe the focus should be on the entire scientific process, rather than only on the results, which are currently still obtained in a rather fragmented way.

This session is organized in line with other Open Science efforts, such as FAIR Your Science.

Co-organized by HS1.2
Convener: Remko C. Nijzink | Co-conveners: Niels Drost, Francesca Pianosi, Stan Schymanski
| Attendance Mon, 04 May, 16:15–18:00 (CEST)

Chat time: Monday, 4 May 2020, 16:15–18:00

D3913 |
Thorsten Wagener

Humanity has always been uncomfortable with knowledge gaps. When John Cabot left Bristol harbour in 1497 to find a new route to Asia, he was trying to fill one of those knowledge gaps. World maps available to him at the time seemingly described the world in great detail. However, when inspecting such maps more closely, one could see that much of this information was just drawings of lions and other monsters, reflecting areas that were actually unexplored. It is claimed that ancient mapmakers demarcated such unknown areas with the phrase HIC SUNT LEONES, "here be lions", suggesting that exploring such areas was dangerous and undesirable. But less than a hundred years later, such maps had changed. They now revealed large areas of white space to reflect a lack of knowledge, thus inviting exploration to discover what was beyond the edge of current knowledge. Acknowledging the unknown became a scientific goal in itself.

Hydrology is rapidly developing into a global science where both mechanistic and data-based models assimilate global datasets to predict hydrologic behaviour across continental and even global domains. Model outputs showing global maps of hydrologic variables like streamflow, soil moisture or groundwater recharge have become increasingly common. However, such maps rarely contain information about where model predictions are made with more or less confidence. Where are models producing trustworthy information and where are we showing (hydrologic) lions? What are the reasons for variability in confidence that should be considered? How can we overcome these reasons? I will explore these questions with different examples drawn from large-scale hydrologic modelling.

How to cite: Wagener, T.: On doing Hydrology with Lions, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9924, https://doi.org/10.5194/egusphere-egu2020-9924, 2020.

D3914 |
Ivan Vorobevskii and Rico Kronenberg

‘Just drop a catchment and receive reasonable model output’ – this rather bold motto captures the idea behind ‘Global BROOK90’, a new open-source R package.

The package is built on top of the lumped physical hydrological model BROOK90 (Federer, C.A. 2002), which focuses on a detailed description of vertical water movement and evapotranspiration.

Our primary goal is to broaden the BROOK90 user community by combining an open-source model with open-source global forcing datasets in order to obtain rough estimates of the water balance components. The presented framework therefore enables the user to apply the model to any possible location through automatic download, extraction and processing of meteorological (Copernicus ERA-5 hourly reanalysis, from 1979 to 2019), topographical (Amazon Web Services), soil (SoilGrids) and land cover (Copernicus Global Land Service: Land Cover) data.

The package workflow consists of the following steps. First, all data necessary to run the model are downloaded according to the georeferenced shapefile of the catchment of interest. Next, a regular grid of 100x100 m is set up to construct hydrotopes in the catchment. Afterwards, BROOK90 is applied to each of the unique hydrotopes. Finally, all queried hydrological variables (i.e. soil moisture, discharge, transpiration fluxes) are computed for the unique hydrotopes as well as for the catchment as a whole (using an area-weighted mean), and stored together with time-series plots in the output folder.
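
The final aggregation step can be sketched as follows (in Python rather than R, with hypothetical hydrotope values; the actual package performs this internally):

```python
# Area-weighted catchment average of per-hydrotope model output,
# as in the final aggregation step of the Global BROOK90 workflow.
def catchment_average(values, areas):
    """values: one output value per hydrotope; areas: hydrotope areas (km2)."""
    total_area = sum(areas)
    if total_area == 0:
        raise ValueError("catchment has zero area")
    return sum(v * a for v, a in zip(values, areas)) / total_area

# Three hypothetical hydrotopes with daily discharge in mm/day
discharge = [1.2, 0.8, 2.0]
areas = [10.0, 30.0, 60.0]
catchment_discharge = catchment_average(discharge, areas)
```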

Due to significant computational time requirements (especially for the retrieval of meteorological data and the number of necessary model runs) and the scope and limitations of BROOK90 itself, the applicability of the framework is expected to be limited to small catchments (<500 km²) or single sites.

Currently, a validation of the package and the global parameterization is being conducted using discharge data from small catchments with time series of at least five years (Global Runoff Database) and evapotranspiration data measured by eddy covariance at meteorological towers (FLUXNET network) located in various climatic zones all over the globe.

How to cite: Vorobevskii, I. and Kronenberg, R.: ‘Drop a catchment and receive model output’: introduction to an open-source R-Package to model the water balance wherever you want, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2767, https://doi.org/10.5194/egusphere-egu2020-2767, 2020.

D3915 |
Edwin Sutanudjaja, Egbert Gramsbergen, Paula Martinez Lavanchy, Annemiek van der Kuil, Jan van der Heul, Vincent Brunst, Otto Lange, Oliver Schmitz, and Niko Wanders

PCR-GLOBWB (Sutanudjaja et al., 2018, https://doi.org/10.5194/gmd-11-2429-2018, https://github.com/UU-Hydro/PCR-GLOBWB_model) is an open source global hydrology and water resources model that has been developed over the past two decades at the Department of Physical Geography, Utrecht University, The Netherlands. The latest version of the model has a fine spatial resolution of 5 arcmin (less than 10 km at the equator) and runs at a daily resolution over a multi-decadal simulation period (> 50 years). Due to its fine resolution and extensive spatio-temporal extent, the total size of a complete set of PCR-GLOBWB input files is huge (about 250 GB uncompressed; 45 GB compressed, see e.g. https://doi.org/10.5281/zenodo.1045338). Consequently, sharing and downloading them is difficult, even for a user who wants to run the model for a limited and specific catchment area only.

In this presentation we share our recent successful effort to prepare and upload the PCR-GLOBWB input files to the 4TU.ResearchData server, https://opendap.4tu.nl, which supports the OPeNDAP protocol (https://www.opendap.org), allowing users to access files on a remote server without the need to download the data files. This includes inspection of the metadata, enabling subsampling of specific ranges of the data (over space and time). OPeNDAP is especially suited to netCDF files, and we have therefore ensured that the PCR-GLOBWB input files are in the correct netCDF format, i.e. following the CF conventions, before uploading the files to the remote server.

The PCR-GLOBWB input files are now available on https://opendap.4tu.nl/thredds/catalog/data2/pcrglobwb/catalog.html. PCR-GLOBWB users can run the model by simply pointing the input directory location to this address (and therefore without having to download the entire set of input files). In this presentation, we demonstrate how to make such runs, not only at the global extent, but also for specific, limited regions (river basin extent).
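
The regional runs mentioned above rely on subsetting the global 5 arcmin grid; the index arithmetic behind such a subset can be sketched as follows (the grid origin and orientation here are assumptions for illustration, not necessarily the actual PCR-GLOBWB conventions):

```python
import math

CELL = 5.0 / 60.0  # 5 arcmin in degrees

def bbox_to_indices(lat_min, lat_max, lon_min, lon_max):
    """Map a bounding box to (row, col) index ranges on a global 5-arcmin
    grid, assuming the origin at (90 N, 180 W) with rows increasing southward."""
    row_start = int(math.floor((90.0 - lat_max) / CELL))
    row_stop = int(math.ceil((90.0 - lat_min) / CELL))
    col_start = int(math.floor((lon_min + 180.0) / CELL))
    col_stop = int(math.ceil((lon_max + 180.0) / CELL))
    return (row_start, row_stop), (col_start, col_stop)

# Approximate bounding box of the Rhine basin
rows, cols = bbox_to_indices(46.1, 51.9, 4.2, 11.8)
```

An OPeNDAP-aware client applies exactly this kind of range selection on the server side, so only the requested slices travel over the network.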

How to cite: Sutanudjaja, E., Gramsbergen, E., Martinez Lavanchy, P., van der Kuil, A., van der Heul, J., Brunst, V., Lange, O., Schmitz, O., and Wanders, N.: OPeNDAP-based access for PCR-GLOBWB input files, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4804, https://doi.org/10.5194/egusphere-egu2020-4804, 2020.

D3916 |
Marco Dal Molin, Dmitri Kavetski, and Fabrizio Fenicia

Hydrological models represent a fundamental tool for linking data with theories in scientific studies. Conceptual models are among the most frequently used types of models in catchment-scale studies, due to their low computational requirements and ease of interpretation. Model selection requires the comparison of model alternatives, which is complicated by differences in conceptualization, implementation, and source code availability of the models present in the literature. For this reason, several model-building frameworks have been introduced in the last decade, which facilitate model comparisons by enabling different model alternatives within the same software and numerical architecture. These frameworks, however, have their own limitations, including the difficulty of extension from a user perspective, the requirement of long set-up procedures, and the need for customized input files.
Building on a decade of experience with the development and usage of Superflex, a flexible modelling framework for conceptual model building so far implemented in Fortran and not available as open source, we propose SuperflexPy, an open-source Python framework for building conceptual hydrological models. SuperflexPy allows the user to build fully customized models using generic elements (i.e. reservoirs, splitters, junctions, lag functions, etc.) and to arrange them as desired, for example to reflect lumped or semi-distributed model configurations. SuperflexPy is easy to configure through modular initialization scripts, easy to extend with custom functionalities, and easy to interface with other frameworks, making it an essential element for creating a continuous and reproducible pipeline that goes from raw data to model results and interpretation.
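
The element-based approach can be illustrated with a toy example (a generic sketch of the idea, not SuperflexPy's actual class interface):

```python
class LinearReservoir:
    """Toy storage element: dS/dt = P - k*S, solved with explicit Euler."""
    def __init__(self, k, storage=0.0):
        self.k = k
        self.storage = storage

    def step(self, inflow, dt=1.0):
        outflow = self.k * self.storage
        self.storage += (inflow - outflow) * dt
        return outflow

# Two elements connected in series, as a framework might arrange them
upper = LinearReservoir(k=0.5)
lower = LinearReservoir(k=0.2)
outflows = [lower.step(upper.step(p)) for p in [10.0, 0.0, 0.0]]
```

A framework generalizes this idea: elements expose a common interface, and connectors (splitters, junctions, lag functions) route fluxes between them.
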
In this presentation, we will introduce this framework, showcasing some applications and highlighting its potential in the context of open science.

How to cite: Dal Molin, M., Kavetski, D., and Fenicia, F.: SuperflexPy: a new open source framework for building conceptual hydrological models, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5110, https://doi.org/10.5194/egusphere-egu2020-5110, 2020.

D3917 |
Gemma Coxon, Nans Addor, Camila Alvarez-Garreton, Hong X. Do, Keirnan Fowler, and Pablo A. Mendoza

Large-sample hydrology (LSH) relies on data from large sets (tens to thousands) of catchments to go beyond individual case studies and derive robust conclusions on hydrological processes and models and provide the foundation for improved understanding of the link between catchment characteristics, climate and hydrological responses. Numerous LSH datasets have recently been released, covering a wide range of regions and relying on increasingly diverse data sources to characterize catchment behaviour. These datasets offer novel opportunities for open hydrology, yet they are also limited by their lack of comparability, accessibility, uncertainty estimates and characterization of human impacts.

Here, we underscore the key role of LSH datasets in open hydrologic science and highlight their potential to enhance the transparency and reproducibility of hydrological studies.  We provide a review of current LSH datasets and identify their limitations, including the current difficulties of inter-dataset comparison and limited accessibility of hydrological observations. To overcome these limitations, we propose simple guidelines alongside long-term coordinated actions for the community, which aim to standardize and automatize the creation of LSH datasets worldwide. This presentation will highlight how, by producing and using common LSH datasets, the community can increase the comparability and reproducibility of hydrological research.

This research was performed as part of the Panta Rhei Working Group on large-sample hydrology and is based on https://doi.org/10.1080/02626667.2019.1683182.

How to cite: Coxon, G., Addor, N., Alvarez-Garreton, C., Do, H. X., Fowler, K., and Mendoza, P. A.: Large-sample hydrology to foster open and collaborative research: a review of recent progress and grand challenges, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6720, https://doi.org/10.5194/egusphere-egu2020-6720, 2020.

D3918 |
Nans Addor, Martyn P. Clark, and Brian Henn

Hydrological models (HMs) are essential tools to explore terrestrial water dynamics and to anticipate future hydrological events. Since their inception, HMs have been developed in parallel by different institutions. There is now a plethora of HMs, yet a relative absence of cross-model developments (code is almost never portable between models) and of guidance on model selection (modellers typically stick to the model they are most familiar with). Furthermore, traditional HMs, developed over the last decades by successive code additions, are rarely adapted to modern hydrological challenges, principally because they lack modularity. These HMs typically rely on a single model structure (most processes are simulated by a single set of equations), which makes it difficult to i) understand differences between models, ii) run a large ensemble of models, iii) capture the spatial variability of hydrological processes and iv) develop and improve hydrological models in a coordinated fashion across the community.

These limitations can be overcome by modular modelling frameworks (MMFs), which are master templates for model generation. MMFs offer several options for each important modelling decision. They also allow users to add functionalities when they are required, by loading libraries developed and maintained by the community. This presentation uses FUSE (Framework for Understanding Structural Error) as an example of MMF for hydrology. FUSE enables the generation of a myriad of conceptual HMs by recombining elements from four commonly-used models. This presentation will summarize the development of FUSE version 2 (FUSE2), which was created with users in mind and significantly increases the usability and range of applicability of the original FUSE. In FUSE2, NetCDF output files contain a detailed description of the modelling decisions (e.g., selected modules, numerical scheme, parameter values), which improves reproducibility. FUSE2 also makes code re-usable, as modules can be used across the community and are not limited to a single model structure. After decades of siloed model development, we argue that MMFs are essential to develop and improve hydrological models in a coordinated fashion across the community.
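
Recording modelling decisions alongside the output, as FUSE2 does in its NetCDF files, can be sketched in plain Python (the decision names below are illustrative, not FUSE2's actual metadata schema):

```python
import hashlib
import json

def decision_record(decisions):
    """Serialize modelling decisions deterministically and fingerprint them,
    so two runs can be checked for identical configurations."""
    blob = json.dumps(decisions, sort_keys=True)
    return blob, hashlib.sha256(blob.encode()).hexdigest()

run_a = {"upper_layer": "tension_storage", "routing": "gamma", "dt_hours": 1}
run_b = {"routing": "gamma", "dt_hours": 1, "upper_layer": "tension_storage"}
_, fingerprint_a = decision_record(run_a)
_, fingerprint_b = decision_record(run_b)
assert fingerprint_a == fingerprint_b  # same decisions -> same fingerprint
```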

How to cite: Addor, N., Clark, M. P., and Henn, B.: The emergence of community models in hydrology, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7349, https://doi.org/10.5194/egusphere-egu2020-7349, 2020.

D3919 |
Stan Schymanski and Jiří Kunčar

Scientific theory is commonly formulated in the form of mathematical equations, and new theory is often derived from a set of pre-existing equations. Most of us have experienced difficulty in following mathematical derivations in scientific publications, and even more so in transferring them into the numerical algorithms that eventually result in quantitative tests and data plots. The Python package Environmental Science using Symbolic Math (ESSM, https://github.com/environmentalscience/essm) offers an open and transparent way to (a) verify derivations in the literature, (b) ensure dimensional consistency of the equations, (c) perform symbolic derivations, (d) transfer mathematical equations into numerical code and perform computations, and (e) generate plots.

Here we present an example workflow using Jupyter notebooks, illustrating the capabilities of the package from (a) to (e), including recently added advanced features.
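
The dimensional-consistency checks of point (b) can be illustrated with a minimal value-with-dimensions type (a conceptual sketch only; ESSM itself performs this bookkeeping symbolically on top of SymPy):

```python
class Quantity:
    """Minimal value-with-dimensions type: multiplication combines dimension
    exponents, addition requires identical dimensions."""
    def __init__(self, value, dims):
        self.value = value
        self.dims = dims  # e.g. {"m": 1, "s": -1} for a velocity

    def __mul__(self, other):
        dims = dict(self.dims)
        for unit, power in other.dims.items():
            dims[unit] = dims.get(unit, 0) + power
        return Quantity(self.value * other.value,
                        {u: p for u, p in dims.items() if p != 0})

    def __add__(self, other):
        if self.dims != other.dims:
            raise ValueError(f"dimension mismatch: {self.dims} vs {other.dims}")
        return Quantity(self.value + other.value, self.dims)

velocity = Quantity(2.0, {"m": 1, "s": -1})
duration = Quantity(3.0, {"s": 1})
distance = velocity * duration  # value 6.0 with dimensions {"m": 1}
```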

How to cite: Schymanski, S. and Kunčar, J.: Open and reproducible science: from theory to equations, algorithms and plots, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9177, https://doi.org/10.5194/egusphere-egu2020-9177, 2020.

D3920 |
Remko Nijzink, Chandrasekhar Ramakrishnan, Rok Roskar, and Stan Schymanski

Numerical experiments are becoming more and more complex, resulting in workflows that are hard to repeat or reproduce. Even though many journals and funding agencies now require open access to data and model code, the linkages between these elements are often still poorly documented or even completely missing. The software platform Renku (https://renkulab.io/), developed by the Swiss Data Science Center, aims at improving the reproducibility and repeatability of the entire scientific workflow. Data, scripts and code are stored in an online repository, and Renku explicitly records all the steps from data import to the generation of final plots, in the form of a knowledge graph. In this way, all output files have a history attached, including linkages to the scripts and input files used to generate them. Renku can visualize the knowledge graph to show all scientific links between inputs, outputs, scripts and models. It enables easy re-use and reproduction of the entire workflow or parts thereof.
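
The lineage that such a knowledge graph encodes can be pictured as a directed graph from inputs through scripts to outputs (a conceptual sketch with hypothetical file names; Renku's actual graph is richer and uses standard provenance vocabularies):

```python
def upstream_of(graph, target):
    """Collect every node a target depends on, given provenance edges
    of the form {output: [scripts_and_inputs, ...]}."""
    seen = set()
    stack = [target]
    while stack:
        for parent in graph.get(stack.pop(), []):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

provenance = {
    "figure.png": ["plot.py", "results.csv"],
    "results.csv": ["model.py", "forcing.nc"],
}
# The figure's full history: both scripts and both data files
history = upstream_of(provenance, "figure.png")
```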

In the test case presented here, the Vegetation Optimality Model (VOM, Schymanski et al., 2009) is applied along six study sites of the North-Australian Tropical Transect to simulate observed canopy-atmosphere exchange of water and carbon dioxide. The VOM optimizes vegetation properties, such as rooting depths and canopy properties, in order to maximize the Net Carbon Profit, i.e. the total carbon taken up by photosynthesis minus all the carbon costs of the plant organs involved. The vegetation is schematized as one big leaf for trees and one leaf for seasonal grasses, and is combined with a water balance model. Flux tower measurements of evaporation and CO2 assimilation, as well as remotely sensed vegetation cover, are used for model evaluation, in addition to meteorological data as input for the model. A numerical optimization algorithm, the Shuffled Complex Evolution, is used to optimize the vegetation properties for each individual site by repeatedly running the model with different parametrizations and computing the net carbon profit over 20 years. The optimization was repeated several times for each site to analyze the sensitivity of the results to a range of different input parameters.
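
The structure of such an optimization loop can be sketched generically (a simple seeded random search stands in here for the far more efficient Shuffled Complex Evolution, and the objective is a toy stand-in for the net carbon profit):

```python
import random

def optimize(objective, bounds, n_iter=500, seed=0):
    """Maximize `objective` over box-bounded parameters by random search."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_iter):
        params = [rng.uniform(lo, hi) for lo, hi in bounds]
        score = objective(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy objective peaking at a rooting depth of 2 m and a canopy parameter of 3
def toy_net_carbon_profit(p):
    return -((p[0] - 2.0) ** 2 + (p[1] - 3.0) ** 2)

params, score = optimize(toy_net_carbon_profit, [(0.0, 10.0), (0.0, 10.0)])
```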

This case demonstrates a complex numerical experiment with all its associated challenges concerning the documentation of model choices, large datasets and a variety of pre- and post-processing steps. Renku ensured the repeatability and reproducibility of this experiment by documenting it in a proper and systematic way. We demonstrate how Renku helped us to repeat analyses and update results, and we will present the knowledge graph of this experiment.

Schymanski, S.J., Sivapalan, M., Roderick, M.L., Hutley, L.B., Beringer, J., 2009. An optimality‐based model of the dynamic feedbacks between natural vegetation and the water balance. Water Resources Research 45. https://doi.org/10.1029/2008WR006841

How to cite: Nijzink, R., Ramakrishnan, C., Roskar, R., and Schymanski, S.: A repeatable and reproducible modelling workflow using the Vegetation Optimality Model and RENKU, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9228, https://doi.org/10.5194/egusphere-egu2020-9228, 2020.

D3921 |
Niels Drost, Jaro Camphuijsen, Rolf Hut, Nick Van De Giesen, Ben van Werkhoven, Jerom P.M. Aerts, Inti Pelupessy, Berend Weel, Stefan Verhoeven, Ronald van Haren, Eric Hutton, Maarten van Meersbergen, Fakhereh Alidoost, Gijs van den Oord, Yifat Dzigan, Bouwe Andela, and Peter Kalverla

The eWaterCycle platform is a fully Open-Source platform built specifically to advance the state of FAIR and Open Science in Hydrological Modeling.

eWaterCycle builds on web technology, notebooks and containers to offer an integrated modelling experimentation environment for scientists. It allows scientists to run any supported hydrological model with ease, including setup and preprocessing of all data required. 

eWaterCycle comes with an easy-to-use explorer, so the user can get started with the system in minutes, and uniquely lets the user generate a hydrological model notebook based on their preferences.

The eWaterCycle platform uses Jupyter as the main interface for scientific work to ensure maximum flexibility. Common datasets such as ERA-Interim and ERA-5 forcing data and observations for verification of model output quality are available for usage by the models.

To make the system capable of running any hydrological model, we use Docker containers coupled through gRPC. This allows us to support models in a multitude of languages and to provide fully reproducible model experiments.

Based on experiences during a FAIR Hydrological Modeling workshop in Leiden in April 2019 we have created a common pre-processing system for Hydrological modeling, based on technology from the climate sciences, in particular ESMValTool and Iris. This pre-processing pipeline can create input for a number of Hydrological models directly from the source dataset such as ERA-Interim in a fully transparent and reproducible manner.

During this PICO presentation, we will explain how this platform supports creating reproducible results in an easy-to-use fashion.

How to cite: Drost, N., Camphuijsen, J., Hut, R., Van De Giesen, N., van Werkhoven, B., Aerts, J. P. M., Pelupessy, I., Weel, B., Verhoeven, S., van Haren, R., Hutton, E., van Meersbergen, M., Alidoost, F., van den Oord, G., Dzigan, Y., Andela, B., and Kalverla, P.: The eWaterCycle platform for FAIR and Open Hydrological Modeling, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11495, https://doi.org/10.5194/egusphere-egu2020-11495, 2020.

D3922 |
Eric Hutton, Mark Piper, Tian Gan, and Greg Tucker

The hydrologic modeling and data community has embraced the open source movement as evidenced by the ever increasing number of FAIR models and datasets available to investigators. Although this has resulted in new science through innovative model application, development, and coupling, the idiosyncratic design of many of these models and datasets acts as a speed bump that slows the time-to-science.

The Basic Model Interface version 2.0 (BMI) specification lowers this hurdle by defining a standardized interface for both models and data. This allows all models and datasets with a BMI to look alike, regardless of their underlying implementation or, in fact, of whether they are truly a model or a dataset. With idiosyncratic implementation details obscured, models and data are more easily and quickly picked up and used: if you know how to use one BMI model, you know how to use any BMI model.
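
The uniformity argument can be made concrete with a toy component exposing a few BMI-style functions (heavily abridged; the full BMI specifies a much larger set of control, getter and setter functions, and its `initialize` takes a configuration file rather than a dict):

```python
class ToyBucketBMI:
    """A toy linear-bucket model behind a small subset of BMI-style calls."""
    def initialize(self, config):
        self.k = config.get("k", 0.1)
        self.storage = config.get("storage", 0.0)
        self.time = 0.0

    def update(self):
        self.storage -= self.k * self.storage  # drain the bucket one step
        self.time += 1.0

    def get_value(self, name):
        if name == "water_storage":
            return self.storage
        raise KeyError(name)

    def get_current_time(self):
        return self.time

    def finalize(self):
        pass

# Any component with this interface is driven the same way,
# whatever its internals:
model = ToyBucketBMI()
model.initialize({"k": 0.5, "storage": 8.0})
model.update()
storage = model.get_value("water_storage")
model.finalize()
```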

In addition, a common interface allows models and data to more easily be brought into a single framework in which they can be queried, run, coupled, and analyzed using a standard set of tools. The Community Surface Dynamics Modeling System (CSDMS) has developed such a modeling framework, the Python Modeling Toolkit (pymt). Although this framework was initially written for the coupling of BMI-enabled numerical models, we have extended it to include BMI-enabled datasets as well. Within such a framework, investigators are able, in a reproducible way, to compare models to one another using a common dataset, validate models against data, ingest data into a model, and swap models and data within a workflow.

As a demonstration of model-data coupling within the pymt, we present examples where BMI-enabled datasets (e.g. USGS gage data, the Operational National Hydrologic Model, NOAA’s National Water Model) are used to drive hydrologic models (e.g. FaSTMECH, PRMS).

How to cite: Hutton, E., Piper, M., Gan, T., and Tucker, G.: The Basic Model Interface 2.0: A standard interface for coupling numerical models and data in the hydrologic sciences, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12488, https://doi.org/10.5194/egusphere-egu2020-12488, 2020.

D3923 |
Raoul Collenteur, Matevz Vremec, and Giuseppe Brunetti

HYDRUS-1D is a popular software suite for one-dimensional modeling of flow and transport through the vadose zone [1]. Models can be handled through the Graphical User Interface (GUI), made freely available by the original authors (https://www.pc-progress.com/). As the program is file-based, the HYDRUS-1D GUI already ensures a certain degree of reproducibility, as these files contain all information about a model. The original FORTRAN code of the HYDRUS-1D model is also made available and is used in many publications to perform more complicated analyses of flow and transport through the unsaturated zone. For each of these publications, new code was written to change the input files and perform a specific analysis. Given the popularity of the hydrological model, it seems only logical to start reusing such code and structurally developing its capabilities. In this presentation, we introduce Phydrus, an open source Python package to create, optimize and visualize HYDRUS-1D models. Python scripts or Jupyter Notebooks are used for all steps of the modeling process, documenting the entire workflow and ensuring reproducibility of the analysis. Connecting HYDRUS-1D to Python makes it easier to perform repetitive tasks on models, and potentially opens up a whole new set of possibilities and applications. While introducing Phydrus, this presentation will also focus on the process of creating the Python package and why we think it is worthwhile for the hydrologic community to interface existing (older) code with newer programming languages popular in the hydrological scientific community.
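
Wrapping a file-based FORTRAN model from Python essentially means generating its input files and parsing its outputs programmatically; a generic sketch (the 'KEY VALUE' format and file name below are hypothetical, not HYDRUS-1D's actual file layout):

```python
def write_input(path, params):
    """Write parameters as simple 'KEY VALUE' lines for a file-based model."""
    with open(path, "w") as fh:
        for key, value in params.items():
            fh.write(f"{key} {value}\n")

def read_input(path):
    """Parse the same format back, so scripts can modify and re-run models."""
    params = {}
    with open(path) as fh:
        for line in fh:
            key, value = line.split(maxsplit=1)
            params[key] = float(value)
    return params

write_input("toy_model.in", {"THETA_R": 0.05, "THETA_S": 0.43})
# A wrapper would now invoke the compiled executable (e.g. via subprocess)
# and parse its output files in the same programmatic style.
```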

[1] Šimůnek, J. and M. Th. van Genuchten (2008) Modeling nonequilibrium flow and transport with HYDRUS, Vadose Zone Journal.

How to cite: Collenteur, R., Vremec, M., and Brunetti, G.: Interfacing FORTRAN Code with Python: an example for the Hydrus-1D model, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-15377, https://doi.org/10.5194/egusphere-egu2020-15377, 2020.

D3924 |
Marcus Strobl, Elnaz Azmi, Sibylle K. Hassler, Mirko Mälicke, Jörg Meyer, and Erwin Zehe

V-FOR-WaTer, a virtual research environment, aims to simplify data access for the environmental sciences, foster data publication and facilitate the preparation and analysis of data with a comprehensive toolbox. A large number of datasets, covering a wide range of spatial and temporal resolutions, are still hardly accessible to anyone other than the original data collector. Frequently these datasets are stored on local storage devices. By giving scientists from universities and state offices open access to data, appropriate pre-processing and analysis tools, and workflows, we accelerate scientific work and facilitate the reproducibility of analyses.

The prototype of the virtual research environment was developed during the last three years. Today it consists of a database with a detailed metadata scheme that is adapted to water and terrestrial environmental data and compliant with international standards (INSPIRE, ISO 19115). Data in the web portal originate from university projects and state offices. The connection of V-FOR-WaTer to established repositories, like the GFZ Data Services, is work in progress. This will simplify both the process of accessing publicly available datasets and that of publishing the portal users’ data, which is increasingly demanded by journals and funding organisations.

The appearance of the web portal is designed to reproduce typical workflows in environmental sciences. A filter menu based on the metadata and a graphical selection on the map give access to the data. A workspace area provides tools for data pre-processing, scaling, common hydrological applications and more specific tools, e.g. geostatistics. The toolbox is easily extendable due to the modular design of the system and will ultimately also include user-developed tools. The selection of the tools is based on current research topics and methodologies in the hydrology community. They are implemented as Web Processing Services (WPS); hence, tool executions can be chained with one another and saved as workflows, enabling more complex analyses and reproducibility of the research.
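
Joining tool executions into saved workflows amounts to composing processing steps over a dataset, which can be sketched as follows (a conceptual illustration with two hypothetical tools, not the V-FOR-WaTer API):

```python
def fill_gaps(series):
    """Replace missing values (None) with the previous observation."""
    out, last = [], 0.0
    for v in series:
        last = last if v is None else v
        out.append(last)
    return out

def two_point_mean(series):
    """Aggregate a regular series pairwise, e.g. half-daily to daily."""
    return [sum(series[i:i + 2]) / 2 for i in range(0, len(series), 2)]

def run_workflow(series, steps):
    """Execute a saved chain of processing tools in order."""
    for step in steps:
        series = step(series)
    return series

workflow = [fill_gaps, two_point_mean]
result = run_workflow([1.0, None, 3.0, 5.0], workflow)
```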

How to cite: Strobl, M., Azmi, E., Hassler, S. K., Mälicke, M., Meyer, J., and Zehe, E.: V-FOR-WaTer – a virtual research environment to access and process environmental data, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-15488, https://doi.org/10.5194/egusphere-egu2020-15488, 2020.

D3925 |
Menno Straatsma, Edwin Sutanudjaja, and Oliver Schmitz

The World Economic Forum ranked extreme weather events, natural disasters, and failure of climate-change mitigation and adaptation among the top five risks in terms of likelihood as well as in terms of environmental and socioeconomic impact. Managing densely populated fluvial areas and adapting them to these combined impacts therefore presents a major challenge for their sustainable development in this century. Common landscaping measures, for example floodplain lowering, side channel recreation, embankment relocation, roughness lowering, groyne lowering, or removal of minor embankments, need to be evaluated to compensate for changes in discharge or sea level rise. Decisions on adaptations require an overview of costs and benefits, and of the number of stakeholders involved. For a rational and convincing decision-making process, it is desirable that stakeholders and planning professionals get easy access to source data, model code, intervention plans and their evaluation.

We used a set of open-source models and software packages to create an interactive tool enabling the exploration of possible futures of fluvial areas in a quantitative manner. The measures are planned and evaluated using RiverScape (Straatsma, 2019) and implemented in the spatio-temporal modelling environment PCRaster (http://www.pcraster.eu). For the seamless integration of explanatory text, user-defined parameterization of measures, execution of RiverScape model code, and interactive visualization of spatial data, we use Jupyter Notebooks (https://jupyter.org/). The notebooks provide an interactive working and teaching environment for integral river management, where professionals, stakeholders or scholars can explore different measures from different disciplinary backgrounds: flood hazard reduction, biodiversity, vegetation succession, and implementation costs. In our presentation we illustrate our integral river management workflow of creating our own measures, evaluating them in isolation, and interpreting the results, using the Waal River in the Netherlands as an example.


Straatsma, M. W., Fliervoet, J. M., Kabout, J. A. H., Baart, F., and Kleinhans, M. G.: Towards multi-objective optimization of large-scale fluvial landscaping measures, Nat. Hazards Earth Syst. Sci., 19, 1167–1187, https://doi.org/10.5194/nhess-19-1167-2019, 2019.

How to cite: Straatsma, M., Sutanudjaja, E., and Schmitz, O.: Interactive exploration of fluvial futures, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18922, https://doi.org/10.5194/egusphere-egu2020-18922, 2020.

D3926 |
Paul Smith, Keith Beven, Ann Kretzschmar, and Nick Chappell

At a minimum, reproducible research requires the use of models with strict version control and documented end points (e.g. executable calls) so that simulations can be repeated with (hopefully) identical code and data.

Opening the research process beyond this requires that both the model source code and documentation can be scrutinised. Achieving this in a meaningful way means going beyond documentation on the code structure, installation and use. Since models are only approximations of physical systems it is important that users appreciate their limitations and are thoughtful in their use. It is therefore suggested that integration of a model into the scientific process requires developers to go further by:

  1. Documenting, in a way that can be directly related to the code, the underlying equations and solutions used by the model and their motivation.
  2. Automating simple reproducible tests on components of the model across a range of dynamic situations beyond those expected.
  3. Providing reproducible case studies highlighting good practice and the limitations of the model, which can be used both to allow users to assess the applicability of the model and to evaluate model changes.
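
Point 2 above can be made concrete with an automated mass-balance check run over a range of forcing scenarios (a generic sketch using a toy storage component, not Dynamic TOPMODEL's actual code):

```python
import random

def bucket_step(storage, inflow, k, dt=1.0):
    """One explicit step of a toy bucket; returns (new_storage, outflow)."""
    outflow = min(k * storage * dt, storage)  # never release more than stored
    return storage + inflow * dt - outflow, outflow

def mass_balance_error(inflows, k):
    """Total inflow minus (total outflow + storage change); should be ~0."""
    storage, total_out = 0.0, 0.0
    for q in inflows:
        storage, out = bucket_step(storage, q, k)
        total_out += out
    return sum(inflows) - (total_out + storage)

# Probe the component across dynamic situations, including extreme forcing
random.seed(1)
for k in (0.01, 0.5, 2.0):
    forcing = [random.uniform(0.0, 100.0) for _ in range(50)]
    assert abs(mass_balance_error(forcing, k)) < 1e-8
```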

We look at an implementation of these ideas with regard to the ongoing development of Dynamic TOPMODEL. We highlight challenges at both the technical and administrative levels and outline how we are addressing them at https://waternumbers.github.io/dynatop/.

How to cite: Smith, P., Beven, K., Kretzschmar, A., and Chappell, N.: Developing and documenting a Hydrological Model for reproducible research: A new version of Dynamic TOPMODEL, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20790, https://doi.org/10.5194/egusphere-egu2020-20790, 2020.