HS1.3.4 | Revisiting good modelling practices – where are we today and where to tomorrow?
Convener: Diana Spieler | Co-conveners: Keirnan Fowler, Zhenyu Wang, Wouter Knoben

Many papers have advised careful consideration of the approaches and methods we choose for our hydrological modelling studies, as they potentially affect our modelling results and conclusions. However, there is no common and consistently updated guidance on what good modelling practice is and how it has evolved since, e.g., Klemeš (1986), Refsgaard & Henriksen (2004) or Jakeman et al. (2006). In recent years several papers have proposed useful practices such as benchmarking (e.g. Seibert et al., 2018), controlled model comparison (e.g. Clark et al., 2011), careful selection of calibration periods (e.g. Motavita et al., 2019) and methods (e.g. Fowler et al., 2018), or testing the impact of subjective modelling decisions along the modelling chain (Melsen et al., 2019). Yet, despite being well justified, none of these proposed methods has become quite as common and indispensable as the split-sample test (Klemeš, 1986) and its generalisation to cross-validation.
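
To make the split-sample idea concrete, below is a minimal, purely illustrative Python sketch (the linear "model", the synthetic data and the parameter search are hypothetical placeholders, not taken from any of the cited studies): a parameter is calibrated on one period and then evaluated, unchanged, on an independent period.

import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 matches the mean-flow benchmark."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

def run_model(forcing, parameter):
    """Placeholder rainfall-runoff 'model': a single linear coefficient."""
    return parameter * forcing

# Synthetic data standing in for observed forcing and streamflow (illustration only).
rng = np.random.default_rng(0)
forcing = rng.gamma(2.0, 2.0, size=730)              # two "years" of daily forcing
observed = 0.6 * forcing + rng.normal(0.0, 0.3, 730)  # noisy synthetic "observations"

# Split-sample test (Klemeš, 1986): calibrate on one period, evaluate on the other.
cal, val = slice(0, 365), slice(365, 730)
candidates = np.linspace(0.1, 1.0, 50)
best = max(candidates, key=lambda p: nse(run_model(forcing[cal], p), observed[cal]))

print(f"calibration NSE: {nse(run_model(forcing[cal], best), observed[cal]):.2f}")
print(f"evaluation  NSE: {nse(run_model(forcing[val], best), observed[val]):.2f}")

The same pattern generalises to cross-validation by rotating which period is held out for evaluation.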

This session intends to provide a platform for a visible and ongoing discussion on what ought to be the current standard(s) for an appropriate modelling protocol that considers uncertainty in all its facets and promotes transparency in the quest for robust and reliable results. We aim to bring together, highlight and foster work that develops, applies, or evaluates procedures for a trustworthy modelling workflow or that investigates good modelling practices for particular aspects of the workflow. We invite research that aims to improve the scientific basis of the entire modelling chain and puts good modelling practice in focus again. This might include (but is not limited to) contributions on:

(1) Benchmarking model results
(2) Developing robust calibration and evaluation frameworks
(3) Going beyond common metrics in assessing model performance and realism
(4) Conducting controlled model comparison studies
(5) Developing modelling protocols and/or reproducible workflows
(6) Examples of adopting the FAIR (Findable, Accessible, Interoperable and Reusable) principles in the modelling chain
(7) Investigating subjectivity and documenting choices along the modelling chain
(8) Uncertainty propagation along the modelling chain
(9) Communicating model results and their uncertainty to end users
(10) Evaluating implications of model limitations and identifying priorities for future model development and data acquisition planning
