EGU2020-8079
https://doi.org/10.5194/egusphere-egu2020-8079
EGU General Assembly 2020
© Author(s) 2020. This work is distributed under
the Creative Commons Attribution 4.0 License.

The role of precipitation in hydrological model uncertainty

András Bárdossy1, Chris Kilsby2, Faizan Anwar1, and Ning Wang1
  • 1University of Stuttgart, Institute for Modelling Hydraulic and Environmental Systems, Stuttgart, Germany (bardossy@iws.uni-stuttgart.de)
  • 2Newcastle University, School of Engineering

Rainfall-runoff models produce outputs which differ from observations due to uncertainties in process description and parametrization, errors in the observations themselves, and the changing spatio-temporal variability of input and state variables. Traditionally, attention has focused mostly on the process parameters when quantifying runoff uncertainty, for example using GLUE.

Here we focused on the role of precipitation uncertainty in relation to discharge. For this purpose, we used an inverse modelling approach. We generated time series of daily precipitation with high spatial resolution, using a modified version of Random Mixing and Shannon-Whittaker interpolation, to improve simulated runoff with the SHETRAN (physically based) and HBV (conceptual) models, both run spatially distributed for various sub-catchments of the Neckar River in Germany. HBV was initially calibrated using interpolated precipitation, while SHETRAN used pre-defined parameters. The modelling goal was to find a spatio-temporal series of precipitation which improved the predicted runoff, under the constraints that the precipitation values remain unchanged at the measurement locations and share the spatial variability of the observations at each time step. Care was taken when selecting successive days for improvement so that each improvement step accounted for the effect of the preceding steps.
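As a rough, hypothetical sketch of the conditioning idea (not the authors' implementation; it assumes normal-score transformed fields on a grid and enough unconditional fields that the minimum-norm weights have length below one): Random Mixing forms a conditional field as a linear combination of unconditional Gaussian fields, where the weights reproduce the station values exactly and a unit-length weight vector preserves the unit variance, and hence the variogram, of the inputs.

    import numpy as np

    def random_mixing_sketch(unconditional_fields, station_idx, station_values):
        """unconditional_fields: (k, n) array of k unconditional standard-normal
        fields sharing the target variogram on an n-cell grid; station_idx: grid
        cells holding the m stations; station_values: normal-score transformed
        observations there, shape (m,)."""
        A = unconditional_fields[:, station_idx].T       # (m, k) constraint matrix
        # Minimum-norm weights solving A @ w = station_values, so the mixed
        # field matches the observations exactly at the station cells.
        w, *_ = np.linalg.lstsq(A, station_values, rcond=None)
        # Add a null-space component to bring ||w|| up to 1: the mixture then
        # keeps unit variance and hence the variogram of the input fields.
        _, _, vt = np.linalg.svd(A)
        null_basis = vt[A.shape[0]:]                     # rows spanning ker(A)
        if null_basis.size and w @ w < 1.0:
            w = w + np.sqrt(1.0 - w @ w) * null_basis[0]  # A @ basis row = 0
        return w @ unconditional_fields                  # conditional field, (n,)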

We asked two questions: i) Does improving the precipitation input for one sub-catchment bring a runoff improvement for the others? ii) Can the precipitation improved using SHETRAN be used for HBV and still yield runoff improvements compared with the interpolated precipitation, and vice versa?

Results showed that overall runoff errors were reduced by 40 to 50% for all sub-catchments. For the peaks, a reduction of 70 to 90% was observed. Compared with the interpolated fields, the new fields showed a similar overall distribution but different details at finer spatial scales. Swapping the improved precipitation fields between SHETRAN and HBV still improved the discharge compared with that obtained from interpolated precipitation.

How to cite: Bárdossy, A., Kilsby, C., Anwar, F., and Wang, N.: The role of precipitation in hydrological model uncertainty, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8079, https://doi.org/10.5194/egusphere-egu2020-8079, 2020

Display materials


Comments on the display material

AC: Author Comment | CC: Community Comment

Display material version 1 – uploaded on 03 May 2020
  • CC1: Comment on EGU2020-8079, Tobias Pilz, 06 May 2020

    Dear authors,

    Thank you for your interesting approach and presentation. I have some questions.

    1) You write parameters should not compensate for observation errors. But now with your approach the optimized rainfall field possibly compensates for erroneous parameters and / or model structures. What about accounting for all such sources of uncertainty, e.g. rainfall fields, parameters, and (e.g. when using flexible models with exchangeable process representations) model structures? Or would that blow up the problem too much (too many degrees of freedom, computational feasibility, ...)?

    2) Sorry, I didn't really understand the details of your implementation. Does one have to apply the method, i.e. the actual calibration of precipitation fields, over the full simulation period or is it possible to split it into a calibration (e.g. where parameters are assessed to derive the precipitation field for each time step based on observed precipitation) and simulation / validation period?

    3) With your approach, how many degrees of freedom are involved, i.e. what are the chances of overfitting?

    4) What about convective rainfall events (thunderstorms) that lead to observed discharge peaks that are usually problematic to simulate? Will these be somehow randomly generated by the algorithm to improve the fit in discharge simulations even when they are undetected by rainfall stations?

    Kind regards,
    Tobias

    • AC1: Reply to CC1, András Bárdossy, 06 May 2020

      Dear Tobias,

      Thank you for your questions and remarks. Here are my answers:

      1) You write parameters should not compensate for observation errors. But now with your approach the optimized rainfall field possibly compensates for erroneous parameters and / or model structures. What about accounting for all such sources of uncertainty, e.g. rainfall fields, parameters, and (e.g. when using flexible models with exchangeable process representations) model structures? Or would that blow up the problem too much (too many degrees of freedom, computational feasibility, ...)?

      Obviously we do have a very large number of degrees of freedom if we take precipitation uncertainty into account the way we suggested. But if you simply ignore it, you may end up adjusting the process description to compensate for precipitation errors. If you calibrate your model on precipitation estimated from different subsets of stations, you often see big differences in the final parameters. Admittedly, in our work we chose an extreme approach in order to quantify how much of the error may be due to precipitation uncertainty. The approach we would finally suggest is to take both kinds of uncertainty into account. That our approach is not "absurd overfitting" can be recognized from the following facts:

      A: The inversion for one model improves the performance of another model on the same catchment.

      B: The inversion with one model parameter set improves the performance of other parameter sets on the same catchment.

      C: The inversion of the model for one catchment led to an improvement of the model on a nearby catchment that was not inverted.


      2) Sorry, I didn't really understand the details of your implementation. Does one have to apply the method, i.e. the actual calibration of precipitation fields, over the full simulation period or is it possible to split it into a calibration (e.g. where parameters are assessed to derive the precipitation field for each time step based on observed precipitation) and simulation / validation period?

      The inversion was done using a calibrated HBV and a SHETRAN set up with literature-based parameters. The model parameters were not changed subsequently. A set of intense precipitation days was selected; these were modified simultaneously and the overall performance was calculated. The procedure is rather complicated and requires a lot of computational resources (one inversion for a catchment takes about 10 days of calculation on a cluster using 20 CPUs). I can send you a more detailed description. We did this large experiment to show how important precipitation uncertainty can be, and that ignoring it might be a serious problem.
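      To give a flavour of the search, here is a much simplified, hypothetical outline (not our code): it perturbs one selected day at a time with a plain squared-error objective, whereas the actual procedure modifies the selected days simultaneously. Every candidate requires a full rainfall-runoff model run, which is what drives the computational cost mentioned above.

          import numpy as np

          def invert_precipitation(fields, intense_days, propose_field,
                                   run_model, q_obs, n_iter=1000, seed=0):
              """fields: (days, cells) precipitation series; propose_field(day)
              returns a new field for that day that still honors the station
              values and spatial structure (e.g. via Random Mixing); run_model
              maps the precipitation series to simulated discharge."""
              rng = np.random.default_rng(seed)
              best = fields.copy()
              best_err = np.mean((run_model(best) - q_obs) ** 2)
              for _ in range(n_iter):
                  cand = best.copy()
                  day = rng.choice(intense_days)    # one selected intense day
                  cand[day] = propose_field(day)    # constrained replacement field
                  err = np.mean((run_model(cand) - q_obs) ** 2)
                  if err < best_err:                # keep only improvements
                      best, best_err = cand, err
              return best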

      3) With your approach, how many degrees of freedom are involved, i.e. what are the chances of overfitting?

      All fields were constrained on all available observations, on their distributions, and on the variograms. This means that for this rather dense network the problem is very strongly constrained (in our case we constrained each day on more than 100 stations). You can see in the figure that all inversions and the interpolation are similar. Of course we may overfit, but as the transfer to other models (see above) was good, the overfitting was limited.
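      For illustration only (a generic Matheron estimator, not our code; it assumes every distance bin contains at least one station pair), the variogram part of such a constraint could be checked as follows, by requiring that a candidate field's empirical variogram stays close to that of the observations:

          import numpy as np

          def empirical_variogram(coords, values, bin_edges):
              """Half the mean squared difference over all station pairs,
              grouped into separation-distance bins (classical estimator);
              bin_edges must be increasing."""
              dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :],
                                    axis=-1)
              gamma = 0.5 * (values[:, None] - values[None, :]) ** 2
              iu = np.triu_indices(len(values), k=1)   # count each pair once
              d, g = dist[iu], gamma[iu]
              which = np.digitize(d, bin_edges)
              return np.array([g[which == b].mean()
                               for b in range(1, len(bin_edges))])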


      4) What about convective rainfall events (thunderstorms) that lead to observed discharge peaks that are usually problematic to simulate? Will these be somehow randomly generated by the algorithm to improve the fit in discharge simulations even when they are undetected by rainfall stations?

      As we use the distribution of the precipitation amounts measured in and around the catchment, convective events which were detected somewhere may have a good inverse. If all stations missed the event, there is no chance to recover it with the present approach.

      Regards,

      Andras