
HS3.1

Hydroinformatics has emerged over the last decades to become a recognised and established field of independent research within the hydrological sciences. It is concerned with the development and hydrological application of mathematical modelling, information technology, systems science and computational intelligence tools. The field also faces the challenges of Big Data: data sets that are large in both size and complexity. Methods and technologies for data handling, visualization and knowledge acquisition are increasingly referred to as Data Science.

The aim of this session is to provide an active forum in which to demonstrate and discuss the integration and appropriate application of emergent computational technologies in a hydrological modelling context. Topics of interest are expected to cover a broad spectrum of theoretical and practical activities that would be of interest to hydro-scientists and water-engineers. The main topics will address the following classes of methods and technologies:

* Predictive and analytical models based on the methods of statistics, computational intelligence, machine learning and data science: neural networks, fuzzy systems, genetic programming, cellular automata, chaos theory, etc.
* Methods for the analysis of complex data sets, including remote sensing data: principal and independent component analysis, time series analysis, information theory, etc.
* Specific concepts and methods of Big Data and Data Science
* Optimisation methods associated with heuristic search procedures: various types of genetic and evolutionary algorithms, randomised and adaptive search, etc.
* Applications of systems analysis and optimisation in water resources
* Hybrid modelling involving different types of models both process-based and data-driven, combination of models (multi-models), etc.
* Data assimilation and model reduction in integrated modelling
* Novel methods of analysing model uncertainty and sensitivity
* Software architectures for linking different types of models and data sources

Applications could belong to any area of hydrology or water resources: rainfall-runoff modelling, flow forecasting, sedimentation modelling, analysis of meteorological and hydrologic data sets, linkages between numerical weather prediction and hydrologic models, model calibration, model uncertainty, optimisation of water resources, etc.

Co-organized by NH1/NP1
Convener: Dimitri Solomatine | Co-conveners: Ghada El Serafy, Amin Elshorbagy, Dawei Han, Adrian Pedrozo-Acuña
Displays | Attendance Tue, 05 May, 08:30–12:30 (CEST)


Chat time: Tuesday, 5 May 2020, 08:30–10:15

Chairperson: Dimitri Solomatine
D114 |
EGU2020-2683
Lewis Sampson, Jose M. Gonzalez-Ondina, and Georgy Shapiro

Data assimilation (DA) is a critical component for most state-of-the-art ocean prediction systems, which optimally combines model data and observational measurements to obtain an improved estimate of the modelled variables, by minimizing a cost function. The calculation requires the knowledge of the background error covariance matrix (BECM) as a weight for the quality of the model results, and an observational error covariance matrix (OECM) which weights the observational data.

Computing the BECM would require knowing the true values of the physical variables, which is not feasible. Instead, the BECM is estimated from model results and observations using methods such as the National Meteorological Center (NMC) method or the Hollingsworth and Lönnberg (1986) (H-L) method. These methods have shortcomings which make them unfit for some situations: they are fundamentally one-dimensional and they make suboptimal use of observations.

We have developed a novel method for error estimation based on an analysis of observations-minus-background data (innovations), which attempts to address these shortcomings. In particular, our method better infers information from observations, requiring less data to produce statistically robust results. We do this by minimizing a linear combination of functions to fit the data using a specifically tailored inner product, an approach we refer to as inner product analysis (IPA).
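
For readers unfamiliar with innovation-based error estimation, the following minimal sketch (not the authors' IPA implementation; the Gaussian basis functions, length scales and synthetic bins are assumptions for illustration) shows the general idea of fitting a linear combination of basis functions to binned innovation covariances with a weighted inner product:

```python
# Illustrative sketch only: weighted least-squares fit of a linear
# combination of Gaussian basis functions to binned innovation covariances,
# where the bin sample counts define the inner product.
import numpy as np

def fit_innovation_covariance(r, cov, counts, length_scales):
    """r: bin-centre separations (km); cov: binned innovation covariances;
    counts: samples per bin (inner-product weights)."""
    A = np.exp(-0.5 * (r[:, None] / np.asarray(length_scales)[None, :]) ** 2)
    w = np.sqrt(counts)                  # <f, g> = sum_i counts_i * f_i * g_i
    coeffs, *_ = np.linalg.lstsq(A * w[:, None], cov * w, rcond=None)
    return coeffs, A @ coeffs            # coefficients and fitted covariance

# Synthetic binned innovations: extrapolating the fit to r = 0 separates the
# background error variance from the observation error at the origin.
r = np.linspace(5.0, 300.0, 30)
cov = 0.6 * np.exp(-0.5 * (r / 80.0) ** 2)
counts = np.full_like(r, 100.0)
coeffs, fitted = fit_innovation_covariance(r, cov, counts, [40.0, 80.0, 160.0])
print(coeffs)
```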

We are able to produce quality BECM estimations even in data-sparse domains, with notably better results in conditions of scarce observational data. Using samples of observations of decreasing size, we show that the stability and efficiency of our method do not deteriorate nearly as much as those of the H-L approach as the number of data points decreases. We have found that we can continue to produce error estimates with a reduced set of data, whereas the H-L method begins to produce spurious values for smaller samples.

Our method works very well in combination with standard tools like NEMOVar, providing the required standard deviations and length-scale ratios. We have successfully run it in the Arabian Sea for multiple seasons and compared the results with the H-L method (in optimal conditions, when plenty of data is available); spatially, the methods perform equally well. Looking at the root mean square error (RMSE), we see very similar performance, with each method giving better results for some seasons and worse for others.

How to cite: Sampson, L., Gonzalez-Ondina, J. M., and Shapiro, G.: An improved variational Data Assimilation method for ocean models with limited number of observations, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2683, https://doi.org/10.5194/egusphere-egu2020-2683, 2020.

D115 |
EGU2020-17809
Vanessya Laborie, Nicole Goutal, and Sophie Ricci

In the context of the development and implementation of data assimilation techniques for flood forecasting in the Gironde estuary, a Telemac 2D model is used to calculate water depths and velocity fields at each node of an unstructured mesh. The upstream model boundaries are La Réole on the Garonne river and Pessac on the Dordogne river, respectively; the maritime boundary is 32 km off the mouth of the Gironde estuary, which is located at Verdon. This model, which contains 7351 nodes and 12838 finite elements, does not take overflows into account. It was calibrated on 4 non-overflowing events and validated on 6 overflowing events.

Uncertainties in the hydraulic parameters and in the fluvial and maritime boundary conditions are quantified and reduced in this study. It is assumed that the time-varying functional uncertainty in the boundary conditions is well approximated by a Gaussian process characterized by an autocorrelation function and an associated correlation length scale. The coefficients of the truncated Karhunen-Loève (KL) decomposition of this process are included in the control vector, together with the friction coefficients and a wind influence factor; a Global Sensitivity Analysis based on variance decomposition is used to quantify uncertainty, and an Ensemble Kalman Filter to reduce it. The performance of the data assimilation strategy, in terms of control vector composition, length and cycling of the data assimilation window, and size of the ensemble and mesh, was assessed on synthetic and real experiments.
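
As an illustration of the boundary-condition parameterisation described above, here is a minimal sketch of a truncated Karhunen-Loève decomposition of a Gaussian process (the squared-exponential autocorrelation, window length, standard deviation and correlation length are assumed values, not those of the study):

```python
# Minimal KL sketch: the retained coefficients 'xi' are the quantities an
# EnKF would place in the control vector for the boundary condition.
import numpy as np

t = np.linspace(0.0, 48.0, 97)            # hours, assumed assimilation window
sigma, L = 0.30, 6.0                      # assumed std (m) and corr. length (h)
C = sigma**2 * np.exp(-0.5 * ((t[:, None] - t[None, :]) / L) ** 2)

vals, vecs = np.linalg.eigh(C)            # eigen-decomposition of the covariance
order = np.argsort(vals)[::-1]
vals, vecs = vals[order], vecs[:, order]

k = np.searchsorted(np.cumsum(vals) / vals.sum(), 0.99) + 1  # keep 99% variance
xi = np.random.randn(k)                   # KL coefficients (standard normal prior)
perturbation = vecs[:, :k] @ (np.sqrt(vals[:k]) * xi)        # one GP realisation

# 'perturbation' is an additive time-varying correction to the maritime
# boundary condition; the filter updates the k coefficients 'xi'.
print(k, perturbation[:5])
```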

Uncertainty in water level was shown to stem predominantly from uncertainty in the maritime boundary condition and in the friction coefficient at the mouth and in the central part of the estuary. Synthetic experiments showed that data assimilation succeeds in identifying time-varying friction following the tidal signal, as well as in reconstructing the time-dependent maritime forcing, even though the identification of the KL coefficients suffers from equifinality. A resampling method based on the persistence of the initial background covariance matrix is used to avoid the well-known ensemble collapse in the Ensemble Kalman Filter. Difficulties in estimating the friction parameter of the confluence zone, where flows result from non-linear physical processes, were highlighted. The equifinality problem in the identification of the KL coefficients of the boundary conditions was shown to be enhanced there; nevertheless, the maritime forcing was properly reconstructed, leading to the expected water level in the estuary. In the real experiment, water levels were significantly improved, with errors smaller than 10 cm along the estuary, except in the upstream sections of the Garonne and Dordogne rivers, where the model refinement should be improved.

KEY WORDS

2D hydrodynamic simulations, TELEMAC, Gironde Estuary, data assimilation, Ensemble Kalman filter, Karhunen-Loève decomposition, time-dependent forcings


How to cite: Laborie, V., Goutal, N., and Ricci, S.: Improving water levels forecast in the Gironde estuary using data assimilation on a 2D numerical model: correction of time-dependent boundary conditions through a truncated Karhunen-Loève decomposition within an Ensemble Kalman Filter, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-17809, https://doi.org/10.5194/egusphere-egu2020-17809, 2020.

D116 |
EGU2020-4772
Uwe Ehret, Rik van Pruijssen, Marina Bortoli, Ralf Loritz, Elnaz Azmi, and Erwin Zehe

The structural properties of hydrological systems such as topography, soils or land use often show a considerable degree of spatial variability, and so do the drivers of system dynamics, such as rainfall. Detailed statements about system states and responses therefore generally require spatially distributed and temporally highly resolved hydrological models. This comes at the price of substantial computational costs. However, even if hydrological subsystems can potentially behave very differently, in practice we often find groups of subsystems that behave similarly, although the number, size and characteristics of these groups vary in time. If we know about such clustered behaviour of subsystems while running a model, we can increase computational efficiency by computing in full detail only a few representatives within each cluster and assigning their results to the remaining cluster members. Thus, we avoid costly redundant computations. Unlike other methods designed to dynamically remove computational redundancies, such as adaptive gridding, dynamical clustering does not require spatial proximity of the model elements.

In our contribution, we use the example of a distributed, conceptual hydrological model of the Attert basin in Luxembourg to present and discuss i) a dimensionless approach to express dynamical similarity, ii) the temporal evolution of dynamical similarity over a 5-year period, iii) an approach to dynamically cluster and re-cluster model elements at run time based on an analysis of clustering stability, and iv) the effect of dynamical clustering in terms of computational gains and the associated losses of simulation quality.
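
A schematic sketch of the clustering step (variable names, the use of k-means and the representative selection are illustrative assumptions, not the authors' code) might look as follows:

```python
# Run-time clustering of model elements: cluster elements by dimensionless
# state features, run only one representative per cluster in full detail,
# and copy its response to the other cluster members.
import numpy as np
from sklearn.cluster import KMeans

def clustered_step(states, run_element, n_clusters=5):
    """states: (n_elements, n_features) similarity features;
    run_element: function computing the expensive per-element response."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(states)
    responses = np.empty(len(states))
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        rep = members[np.argmin(np.linalg.norm(
            states[members] - km.cluster_centers_[c], axis=1))]
        responses[members] = run_element(rep)   # compute once, assign to all
    return responses, km.labels_

# Toy usage; re-clustering would be triggered only when the assignments
# become unstable, e.g. at the onset of precipitation.
states = np.random.rand(100, 3)
resp, labels = clustered_step(states, run_element=lambda i: states[i].sum())
```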

For the Attert model, we found that there is indeed high redundancy among model elements, that the degree of redundancy varies with time, and that the spatial patterns of similarity are mainly controlled by geology and precipitation. Compared to a standard, full-resolution model run used as a virtual-reality 'truth', computation time could be reduced to one fourth when modelling quality, expressed as the Nash-Sutcliffe efficiency of discharge, was allowed to decrease from 1 to 0.84. Re-clustering occurred at irregular intervals, mainly associated with the onset of precipitation, but on average the patterns of similarity were quite stable, such that during the entire six-year simulation period only 165 re-clusterings were carried out, i.e. on average once every eleven days.

How to cite: Ehret, U., van Pruijssen, R., Bortoli, M., Loritz, R., Azmi, E., and Zehe, E.: Dynamical clustering: A new approach to make distributed (hydrological) modeling more efficient by dynamically detecting and removing redundant computations, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4772, https://doi.org/10.5194/egusphere-egu2020-4772, 2020.

D117 |
EGU2020-14396
Maren Kaluza, Luis Samaniego, Stephan Thober, Robert Schweppe, Rohini Kumar, and Oldrich Rakovec

Parameter estimation of a global-scale, high-resolution hydrological model requires a powerful supercomputer and an optimized parallelization algorithm. Improving the efficiency of such an implementation is essential to advance hydrological science and to minimize the uncertainty of the major hydrologic fluxes and storages at continental and global scales. To achieve this goal within the ESM project [1], the main transfer-function parameters of the mHM model will be estimated by jointly assimilating evapotranspiration (ET) from FLUXNET, the terrestrial water storage (TWS) anomaly from GRACE (NASA) and streamflow time series from 5500 GRDC gauges.

For the parallelization of the objective functions, a hybrid MPI-OpenMP scheme is implemented. While the parallelization into equally sized subdomains for cell-wise computations of fluxes (e.g., ET, TWS) is trivial, cell-to-cell fluxes need to be computed for streamflow routing. For time series datasets, the advanced parallelization algorithm MPI-parallelized Decomposition of Forest (MDF) will be used.

In this study, we go beyond the standard approach, which decomposes the river into tributaries (e.g. the Pfafstetter system [2]). We apply a non-trivial graph algorithm to decompose each river network into a tree data structure whose nodes represent subbasin domains of almost equal size [3].
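
The following toy sketch conveys the flavour of such a decomposition (the greedy cut rule is illustrative and not the actual algorithm of [3]): subtrees are cut off once they reach a target size, so each cut node roots a subdomain of roughly equal size.

```python
# Decompose a river network (a tree of cells draining to an outlet) into
# connected subdomains of roughly equal size.
def decompose(children, root, target):
    """children: dict node -> list of upstream nodes; target: max subtree size.
    Returns the list of cut nodes, each rooting one subdomain."""
    cuts, size = [], {}

    def visit(node):
        s = 1
        for up in children.get(node, []):
            visit(up)
            if size[up] >= target:       # subtree large enough: cut it off
                cuts.append(up)
            else:
                s += size[up]            # otherwise absorb it downstream
        size[node] = s

    visit(root)
    cuts.append(root)                    # remainder forms the last subdomain
    return cuts

# Example: a small binary drainage tree with 7 cells and target size 3
# yields three subdomains rooted at nodes 1, 2 and 0.
tree = {0: [1, 2], 1: [3, 4], 2: [5, 6]}
print(decompose(tree, root=0, target=3))
```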

We analyze several aspects affecting the MDF parallelization: (1) the communication time between nodes; (2) buffering data before sending; (3) optimizing total node idle time and total run time; and (4) memory imbalance between master processes and other processes.

We run the mHM model on the high-performance JUWELS supercomputer at the Jülich Supercomputing Centre (JSC), where the (routing) code efficiently scales up to ~180 nodes with 96 CPUs each. We discuss different parallelization aspects, including the effect of parameters on the scaling of MDF, and we show the benefits of MDF over a non-parallelized routing module.

[1] https://www.esm-project.net/
[2] http://proceedings.esri.com/library/userconf/proc01/professional/papers/pap1008/p1008.htm
[3] https://meetingorganizer.copernicus.org/EGU2019/EGU2019-8129-1.pdf

How to cite: Kaluza, M., Samaniego, L., Thober, S., Schweppe, R., Kumar, R., and Rakovec, O.: Massive Parallelization of the Global Hydrological Model mHM, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-14396, https://doi.org/10.5194/egusphere-egu2020-14396, 2020.

D118 |
EGU2020-11730
Niels Drost, Rolf Hut, Nick Van De Giesen, Ben van Werkhoven, Jerom P.M. Aerts, Jaro Camphuijsen, Inti Pelupessy, Berend Weel, Stefan Verhoeven, Ronald van Haren, Eric Hutton, Fakhereh Alidoost, Gijs van den Oord, Yifat Dzigan, Bouwe Andela, and Peter Kalverla and the Model Contributors

The eWaterCycle platform is a fully open-source platform built specifically to advance the state of FAIR and Open Science in hydrological modelling. eWaterCycle builds on web technology, notebooks and containers to offer an integrated modelling experimentation environment for scientists. It allows scientists to run any supported hydrological model with ease, including the setup and pre-processing of all required data. Common datasets such as ERA-Interim and ERA5 forcing data, and observations for verifying model output quality, are available to the models, and a Jupyter-based interface is provided for ease of use.

As the main API for models we use the Basic Model Interface (BMI), which allows us to support models written in a multitude of languages. Our gRPC-based system allows coupling of models and running multiple instances of the same model. The system was designed to work with higher-level interfaces such as PyMT, and we are currently integrating PyMT into our platform. During my talk I will give an overview of the different elements of the eWaterCycle platform.
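
To illustrate why BMI enables this, here is a minimal sketch of a toy model wrapped behind a subset of the interface (the bucket model and its names are invented for the example; a real eWaterCycle wrapper implements the full BMI 2.0 specification):

```python
# A toy linear-bucket model exposed through a subset of BMI: any framework
# that speaks BMI can initialize, step and query it without knowing its
# internals.
import numpy as np

class BucketBmi:
    def initialize(self, config_file=None):
        self.storage = np.array([10.0])   # mm, assumed initial storage
        self.k = 0.1                      # 1/day, assumed recession constant
        self.time, self.dt, self.end = 0.0, 1.0, 365.0

    def update(self):
        discharge = self.k * self.storage
        self.storage -= discharge * self.dt
        self.time += self.dt

    def get_component_name(self):
        return "toy linear bucket"

    def get_current_time(self):
        return self.time

    def get_end_time(self):
        return self.end

    def get_value(self, name, dest):
        dest[:] = self.storage            # BMI 2.0 copies into caller's buffer
        return dest

    def finalize(self):
        pass

model = BucketBmi()
model.initialize()
buf = np.empty(1)
while model.get_current_time() < 10.0:
    model.update()
print(model.get_value("bucket_water_storage", buf))
```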

The BMI interface was specifically designed to be easy to implement in any given model. During the FAIR Hydrological Modeling workshop, a number of modellers worked on creating a BMI interface for their models and making them available in the eWaterCycle system. To show the amount of effort required in common cases, I will show the BMI interfaces created for a number of these models, including SUMMA, HYPE, Marrmot, TopoFlex, LisFlood, WFLOW, and PCR-GLOBWB.

How to cite: Drost, N., Hut, R., Van De Giesen, N., van Werkhoven, B., Aerts, J. P. M., Camphuijsen, J., Pelupessy, I., Weel, B., Verhoeven, S., van Haren, R., Hutton, E., Alidoost, F., van den Oord, G., Dzigan, Y., Andela, B., and Kalverla, P. and the Model Contributors: Coupling Hydrological models using BMI in eWaterCycle, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11730, https://doi.org/10.5194/egusphere-egu2020-11730, 2020.

D119 |
EGU2020-1778
Wenqi Wang, Dong Wang, Vijay P. Singh, and Yuankun Wang

Rainfall networks provide the rainfall data needed for water resource management and decision-making. These data are especially important for runoff simulation and forecasting when intense rainfall occurs in the flood season. Rainfall networks should, therefore, be carefully designed and evaluated. Information theory-based methods have lately received significant attention for rainfall network design. This study focuses on the integrated design of a rainfall network, especially for streamflow simulation. We propose a multi-objective rainfall network design method based on information theory and apply it to the Wei River basin in China. Because the designed rainfall network serves as input to a rainfall-runoff model, the design takes into account streamflow data at the outlet hydrometric station. We use total correlation as an indicator of information redundancy and multivariate transinformation as an indicator of information transfer; information redundancy refers to the information overlap between rainfall stations, and information transfer refers to the rainfall-runoff relationship. The outlet hydrometric station (Huaxian station in the Wei River basin) is used as the target station for the streamflow simulation. A non-dominated sorting genetic algorithm (NSGA-II) was used for the multi-objective optimization of the rainfall network design. We compared the proposed multi-objective design with two other methods using an artificial neural network (ANN) model. The optimized rainfall network from the proposed method led to reasonable outlet streamflow forecasts, balancing network efficiency and streamflow simulation. Our results indicate that the multi-objective strategy provides an effective design by which the rainfall network can account for the rainfall-runoff process and benefit streamflow prediction at the catchment scale.
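
As a hedged illustration of one of the information-theoretic objectives, the following sketch estimates the total correlation among a few candidate stations from discretized series (synthetic data; the bin count is an assumption):

```python
# Total correlation C = sum_i H(X_i) - H(X_1, ..., X_n), in bits; C is near
# zero for independent stations and grows with redundancy.
import numpy as np

def entropy(counts):
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

def total_correlation(series, bins=10):
    """series: (n_days, n_stations) rainfall matrix."""
    digitized = np.stack(
        [np.digitize(x, np.histogram_bin_edges(x, bins)) for x in series.T],
        axis=1)
    marginal = sum(entropy(np.bincount(col)) for col in digitized.T)
    _, joint_counts = np.unique(digitized, axis=0, return_counts=True)
    return marginal - entropy(joint_counts)

rain = np.random.gamma(0.4, 5.0, size=(3650, 3))   # synthetic 3-station record
print(total_correlation(rain))
```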

How to cite: Wang, W., Wang, D., Singh, V. P., and Wang, Y.: Multi-objective design of rainfall network based on information theory for streamflow simulation, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1778, https://doi.org/10.5194/egusphere-egu2020-1778, 2020.

D120 |
EGU2020-9815
| solicited
Maria C. Cunha and João Marques

Multiobjective optimization of water distribution networks (WDNs) is a very lively area of research (Marques et al., 2018). To evaluate the performance of these algorithms, different metrics can be used to quantify and compare the quality of the solutions during the run-time and at the end-time of the optimization process. The quality evaluation of the set of non-dominated solutions found by these algorithms is not a trivial process. The literature review by Audet et al. (2018) includes 57 distinct performance indicators that can be used to evaluate solutions provided by multiobjective algorithms, and groups these indicators into four categories: cardinality, convergence, distribution and spread. These categories aim at characterizing, respectively, the number of solutions provided by each algorithm, the closeness of the solutions to the best-known front, the distribution of solutions along the front, and the range of the set of solutions found. To evaluate a multiobjective algorithm, performance indicators covering all four categories should be considered to prevent misleading conclusions.

The authors have recently proposed a new multiobjective simulated annealing algorithm. It is an enhanced version of the algorithm presented in Marques et al. (2018) in that it uses special features to generate candidate solutions and a final step that involves a local search. Different generation processes guide the search and allow the algorithm to reach parts of the Pareto front that could not be reached if a single generation process were used. The local search, a reannealing phase, is implemented as a supplemental phase of the algorithm to concentrate the search in specific areas of the front and identify the best possible solutions.

The present work evaluates the performance of this algorithm by means of performance indicators from all four categories, computed for a set of benchmark WDNs presented in Wang et al. (2015). From the results it can be concluded that the proposed algorithm achieves higher-quality solutions than other algorithms, and does so without increasing the computational effort.
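
To make the indicator categories concrete, the sketch below computes two simple examples, cardinality and spread, for an assumed two-objective minimisation problem (illustrative only; not the indicator set used in the study):

```python
# Cardinality = number of non-dominated solutions; spread = objective-space
# extent of the approximated front.
import numpy as np

def non_dominated(F):
    """F: (n_solutions, n_objectives) objective matrix, to be minimised."""
    keep = np.ones(len(F), dtype=bool)
    for i, f in enumerate(F):
        if keep[i]:
            dominated = np.all(F >= f, axis=1) & np.any(F > f, axis=1)
            keep &= ~dominated
    return F[keep]

def cardinality_and_spread(F):
    front = non_dominated(F)
    spread = np.linalg.norm(front.max(axis=0) - front.min(axis=0))
    return len(front), spread

F = np.random.rand(200, 2)          # e.g. (network cost, 1 - resilience)
n, spread = cardinality_and_spread(F)
print(f"{n} non-dominated solutions, spread {spread:.3f}")
```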


Acknowledgments

This work is partially supported by the Portuguese Foundation for Science and Technology under project grant UIDB/00308/2020.


References

Audet, C., Bigeon, J., Cartier, D., and Le, S. (2018). Performance indicators in multiobjective optimization. European Journal of Operational Research, 1–39.

Marques, J., Cunha, M. and Savić, D. (2018). Many-objective optimization model for the flexible design of water distribution networks. Journal of Environmental Management, 226, 308–319.

Wang, Q., Guidolin, M., Savić, D., and Kapelan, Z. (2015). Two-Objective Design of Benchmark Problems of a Water Distribution System via MOEAs: Towards the Best-Known Approximation of the True Pareto Front. Journal of Water Resources Planning and Management, 141(3), 04014060.

How to cite: Cunha, M. C. and Marques, J.: Performance evaluation of a multiobjective optimization algorithm for the design of water distribution networks, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9815, https://doi.org/10.5194/egusphere-egu2020-9815, 2020.

D121 |
EGU2020-15217
Biswa Bhattacharya and Junaid Ahmad

Satellite-based rainfall estimates (SBREs) are used as an alternative to gauge rainfall in hydrological studies, particularly for basins with data issues. However, these products exhibit errors that cannot always be corrected by bias correction methods such as Ratio Bias Correction (RBC). Data fusion, or data merging, can be a good approach for combining various SBREs into a fused dataset that benefits from all the data sources and may minimise the error in rainfall estimates. Data merging methods commonly applied in meteorology and hydrology are the arithmetic merging method (AMM), inverse error squared weighting (IESW) and error variance (EV). Among these, EV is popular; it merges bias-corrected SBREs using the minimisation-of-variance principle.

In this research we propose K Nearest Neighbour (KNN) classification as a data merging method. KNN has a particular advantage in that it does not depend on any specific statistical model to merge data, and it offers great flexibility because the value of K (the number of neighbours) can be varied to suit the purpose (for example, choosing different K values for different seasons). We compute the distances of the bias-corrected SBREs in the training data from the gauge data and assign the SBRE with the minimum distance as the class C, where C = 1, 2, 3, …, number of SBREs. In validation, each data point, consisting of one value per SBRE, is compared with the data points from the training set, and the class of the closest training point(s) is assigned to the validation data point.
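
A minimal sketch of this merging step (synthetic data and assumed variable names; not the authors' code) could look like this:

```python
# Each training day is labelled with the SBRE closest to the gauge value;
# a KNN classifier then picks the product to trust for unseen days.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Columns: bias-corrected CMORPH, PERSIANN-CDR, TRMM 3B42 (mm/day, synthetic)
X_train = np.random.gamma(0.5, 8.0, size=(1000, 3))
gauge = X_train.mean(axis=1) + np.random.randn(1000)       # synthetic gauge

# Class C = index of the SBRE with minimum distance to the gauge observation
y_train = np.argmin(np.abs(X_train - gauge[:, None]), axis=1)

knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)

X_new = np.random.gamma(0.5, 8.0, size=(5, 3))
best = knn.predict(X_new)                 # which product to use on each day
merged = X_new[np.arange(len(X_new)), best]
print(best, merged)
```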

The KNN approach as a data merging method was applied to the Indus basin in Pakistan. Three satellite rainfall products, CMORPH, PERSIANN-CDR and TRMM 3B42, with 0.25° x 0.25° spatial and daily temporal resolution, were used. Based on its climatic and physiographic features, the Indus basin was divided into four zones. Rainfall products were compared at daily, weekly, fortnightly, monthly and seasonal temporal scales, and at gauge-location, zonal and basin spatial scales. The RBC method was used to correct the bias. The KNN method with K = 1, 3 and 5 was used and compared with the other merging methods, namely AMM, IESW and EV. The results were compared for two seasons, i.e. the non-wet and the wet season. AMM and EV performed similarly, whereas IESW performed poorly at zonal scales. The KNN merging method outperformed all other merging methods and gave the lowest error across the basin. The daily normalised root mean square error at the Indus basin scale was reduced to 0.3, 0.45 and 0.45 with KNN, AMM and EV respectively, whereas this error was 0.8, 0.65 and 0.53 in the CMORPH, PERSIANN-CDR and TRMM datasets respectively. The KNN-merged product gave the lowest error at the daily scale in both the calibration and validation periods, which demonstrates that merging with KNN improves rainfall estimates in sparsely gauged basins.


Key words: Merging, data fusion, K nearest neighbour, KNN, error variance, Indus.

How to cite: Bhattacharya, B. and Ahmad, J.: Merging of satellite rainfall estimates from diverse sources with K nearest neighbour in sparsely gauged basins, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-15217, https://doi.org/10.5194/egusphere-egu2020-15217, 2020.

D122 |
EGU2020-15223
Daniel Hawtree, John O'Sullivan, Gregory O'Hare, Levent Görgü, Conor Muldoon, Wim G. Meijer, Bartholomew Masterson, Aurora Gitto, Malcolm Taylor, and Elaine Mitchell

The European Bathing Water Directive (BWD; 2006/7/EC) requires the implementation of early warning systems for bathing waters that are subject to short-term pollution events. To this end, the EU SWIM project is developing coastal water quality prediction models and alert systems at nine beach sites in the Republic of Ireland and Northern Ireland, which represent a range of baseline water quality and site conditions.

At each site, statistical/machine-learning predictive models are being developed based on site-specific relationships between fecal indicator bacteria and multiple environmental variables. A unique aspect of the approach is the use of historical back-cast climate data (Met Éireann's MÉRA dataset) as the foundation of model development, and the use of a related climate forecast dataset (Met Éireann's Harmonie dataset) for forecasts. By integrating these datasets into a predictive system, environmental variables can be utilized at spatial and temporal resolutions exceeding what is typically available from alternative data sources (e.g. weather station gauges). This approach enables the production of a continuous stream of short-term water quality forecasts, which can then be validated against data collected by routine compliance sampling, as well as targeted supplementary water quality sampling.

This presentation provides an overview of the end-to-end prediction system, a summary of the underlying models, and a discussion of the challenges and opportunities presented by this forecasting framework.

How to cite: Hawtree, D., O'Sullivan, J., O'Hare, G., Görgü, L., Muldoon, C., Meijer, W. G., Masterson, B., Gitto, A., Taylor, M., and Mitchell, E.: The Development of a Water Quality Forecasting System for Recreational Coastal Bathing Waters in Ireland, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-15223, https://doi.org/10.5194/egusphere-egu2020-15223, 2020.

D123 |
EGU2020-4243
Paul Munoz, Johanna Orellana-Alvear, Jörg Bendix, and Rolando Célleri

Flood Early Warning Systems have globally become an effective tool to mitigate the adverse effects of this natural hazard on society, economy and environment. A novel approach for such systems is to actually forecast flood events rather than merely monitor the catchment hydrograph evolution on its way to an inundation site. A wide variety of modelling approaches, from fully physical to data-driven, have been developed depending on the availability of information describing intrinsic catchment characteristics. Over the last decades, however, the use of Machine Learning techniques has remarkably gained popularity due to its power to forecast floods with minimal demands on data and computational cost.

Here, we selected the algorithms most commonly employed for flood prediction (K-nearest Neighbors, Logistic Regression, Random Forest, Naïve Bayes and Neural Networks) and used them in a precipitation-runoff classification problem aimed at forecasting the inundation state of a river at a decisive control station. The states are “No-alert”, “Pre-alert” and “Alert” of inundation, with lead times of 1, 4, 8 and 12 hours. The study site is a 300-km2 catchment in the tropical Andes draining to Cuenca, the third most populated city of Ecuador. Cuenca is susceptible to annual floods, and the generated alerts will thus be used by local authorities to inform the population about upcoming flood risks. For an integral comparison between forecasting models, we propose a scheme relying on the F1-score, the Geometric mean and the Log-loss score to account for the resulting data imbalance and the multiclass classification problem. Furthermore, we used the Chi-Squared test to ensure that differences in model results were due to the algorithm applied and not to statistical chance.

We reveal that the most effective model according to the F1-score uses the Neural Networks technique (0.78, 0.62, 0.51 and 0.46 for the test subsets of the 1, 4, 8 and 12-hour forecasting scenarios, respectively), followed by the Logistic Regression algorithm. For the remaining algorithms, we found F1-score differences between the best and the worst model inversely proportional to the lead time (i.e., differences between models were more pronounced for shorter lead times). Moreover, the Geometric mean and the Log-loss score showed similar patterns of degradation of the forecast ability with lead time for all algorithms. The overall higher scores found for the Neural Networks technique suggest this algorithm as the engine for the best forecasting Early Warning Systems of the city. For future research, we recommend further analyses of the effect of input data composition and of the architecture of the algorithm to fully exploit its capacity, which would improve model performance and extend the lead time. The usability and effectiveness of the developed systems will depend, however, on the speed of communication to the public after an inundation signal is indicated. We suggest complementing our systems with a website and/or mobile application as a tool to boost preparedness against floods for both decision makers and the public.
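
For illustration, the evaluation scheme described above could be computed as in the following sketch (toy data; the geometric mean of per-class recalls is computed directly):

```python
# F1, G-mean and log-loss for a 3-class ("No-alert", "Pre-alert", "Alert")
# forecast, using scikit-learn.
import numpy as np
from sklearn.metrics import f1_score, log_loss, recall_score

y_true = np.array([0, 0, 0, 0, 1, 1, 2, 0, 1, 2])       # imbalanced classes
y_pred = np.array([0, 0, 1, 0, 1, 1, 2, 0, 2, 2])
y_prob = np.full((len(y_pred), 3), 0.1)                  # toy class probabilities
y_prob[np.arange(len(y_pred)), y_pred] = 0.8             # rows sum to 1.0

f1 = f1_score(y_true, y_pred, average="macro")           # imbalance-aware F1
gmean = np.prod(recall_score(y_true, y_pred, average=None)) ** (1 / 3)
ll = log_loss(y_true, y_prob, labels=[0, 1, 2])
print(f"F1={f1:.2f}  G-mean={gmean:.2f}  log-loss={ll:.2f}")
```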

Keywords: Flood; forecasting; Early Warning; Machine Learning; Tropical Andes; Ecuador.

How to cite: Munoz, P., Orellana-Alvear, J., Bendix, J., and Célleri, R.: Comparison of Machine Learning Techniques Powering Flood Early Warning Systems. Application to a catchment located in the Tropical Andes of Ecuador., EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4243, https://doi.org/10.5194/egusphere-egu2020-4243, 2020.

D124 |
EGU2020-22273
Wei Lu and Christine Shoemaker

Increasing climatic extremes and urbanization have led to a higher frequency of extreme rainfall events and a higher risk of urban floods. Water Retention Measures (WRMs) are proposed as one of the countermeasures for controlling urban flood risks. WRMs involve a series of decentralized stormwater management facilities, such as bio-retention cells (BC) and green roofs (GR). Simulation-optimization approaches, which combine hydrological models and optimization algorithms, are developed for identifying cost-effective layouts of WRMs. Traditional evolutionary algorithms (e.g. the genetic algorithm, GA) are generally time-consuming for computationally expensive simulation-optimization problems and have difficulty reaching the global optimum in high-dimensional decision spaces. On the other hand, rainfall plays a key role among the various climate inputs driving hydrological models, and uncertainties associated with rainfall characteristics (e.g. rainfall depth and temporal pattern) can have a great impact on the reliability of the simulation-optimization results.

Through a case study, we propose a robust surrogate-based simulation-optimization scheme for designing the layout of two types of WRMs (GR and BC) under rainfall uncertainty. The WRMs are embedded in a hydrological model (the Storm Water Management Model, SWMM). The objective is to maximize the reduction of flood damage costs with a limited budget for WRMs. Design rainfalls are developed on the basis of the local IDF curve and 30 years of daily rainfall records, with various depths and patterns considered for driving the SWMM model, which makes each WRM simulation expensive (around 4 minutes). To solve this expensive global optimization problem, we adopted an improved surrogate global optimization algorithm, DYnamic COordinate search using Response Surface models (DYCORS), in which the surrogate is designed to reduce the number of expensive function evaluations. With the budget for WRM simulations (i.e. function evaluations) capped at 500, DYCORS manages to find a good optimal solution within 32 hours of CPU run time. When uncertain inputs (like rainfall) increase the complexity and computational cost of the hydrological simulation-optimization problem, the proposed scheme is a promising way to support urban water managers in a more science-based WRM design for flood risk mitigation.
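
For readers unfamiliar with surrogate optimization, the sketch below shows a highly simplified, DYCORS-flavoured loop (illustrative only; the study used the actual DYCORS algorithm, and the toy objective stands in for a SWMM-based flood damage evaluation):

```python
# An RBF surrogate proposes candidates by perturbing a few coordinates of
# the current best point; only the most promising candidate is evaluated
# with the expensive simulation, so expensive calls stay within the budget.
import numpy as np
from scipy.interpolate import RBFInterpolator

def expensive_simulation(x):                  # stand-in for a SWMM run
    return float(np.sum((x - 0.3) ** 2))

rng = np.random.default_rng(0)
dim, budget = 10, 60
X = rng.uniform(0, 1, size=(2 * dim, dim))    # initial space-filling design
y = np.array([expensive_simulation(x) for x in X])

while len(y) < budget:
    surrogate = RBFInterpolator(X, y)         # cheap response-surface model
    best = X[np.argmin(y)]
    cands = np.tile(best, (100, 1))           # candidates around current best
    mask = rng.uniform(size=cands.shape) < 0.2    # perturb few coordinates
    cands[mask] += 0.1 * rng.standard_normal(np.count_nonzero(mask))
    cands = np.clip(cands, 0, 1)
    cands = cands[np.any(cands != best, axis=1)]  # drop unperturbed copies
    pick = cands[np.argmin(surrogate(cands))]     # screen on the surrogate
    X = np.vstack([X, pick])
    y = np.append(y, expensive_simulation(pick))  # one expensive call/iter

print("best objective found:", y.min())
```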

How to cite: Lu, W. and Shoemaker, C.: Surrogate-based Simulation-Optimization Scheme for Designing Water Retention Measures under Rainfall Uncertainty, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22273, https://doi.org/10.5194/egusphere-egu2020-22273, 2020.

D125 |
EGU2020-9834
Jonas Sukys and Marco Bacci

SPUX (Scalable Package for Uncertainty Quantification in "X") is a modular framework for Bayesian inference and uncertainty quantification. The SPUX framework aims at harnessing high-performance scientific computing to tackle complex aquatic dynamical systems rich in intrinsic uncertainties, such as ecosystems, hydrological catchments, lake dynamics, subsurface flows, urban floods, etc. The challenging task of quantifying input, output and/or parameter uncertainties in such stochastic models is tackled using Bayesian inference techniques, where numerical sampling and filtering algorithms assimilate prior expert knowledge and available experimental data. The SPUX framework greatly simplifies uncertainty quantification for realistic, computationally costly models and provides an accessible, modular, portable, scalable, interpretable and reproducible scientific workflow. To achieve this, SPUX can be coupled to any serial or parallel model written in any programming language (e.g. Python, R, C/C++, Fortran, Java), can be installed either on a laptop or on a parallel cluster, and has built-in support for automatic reports, including algorithmic and computational performance metrics. I will present key SPUX concepts using a simple random walk example, and showcase recent realistic applications for catchment and lake models. In particular, uncertainties in model parameters, meteorological inputs, and data observation processes are inferred by assimilating available in-situ and remotely sensed datasets.
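
As a self-contained miniature of the kind of inference problem mentioned (plain Python, not the SPUX API), the following sketch infers the step-size parameter of a random walk from noisy observations with a simple Metropolis sampler:

```python
# Bayesian inference of the step size of a stochastic random walk observed
# with noise, via a Metropolis random-walk sampler and a flat prior.
import numpy as np

rng = np.random.default_rng(1)
T, sigma_true, obs_err = 50, 0.8, 0.5
truth = np.cumsum(sigma_true * rng.standard_normal(T))
data = truth + obs_err * rng.standard_normal(T)

def log_likelihood(sigma):
    # Increments of the data are approximately N(0, sigma^2 + 2*obs_err^2);
    # their MA(1) correlation is ignored here for simplicity.
    inc = np.diff(data)
    var = sigma**2 + 2 * obs_err**2
    return -0.5 * np.sum(inc**2 / var + np.log(2 * np.pi * var))

sigma, samples = 1.0, []
for _ in range(5000):
    prop = abs(sigma + 0.1 * rng.standard_normal())  # reflected, symmetric
    if np.log(rng.uniform()) < log_likelihood(prop) - log_likelihood(sigma):
        sigma = prop
    samples.append(sigma)

print("posterior mean sigma:", np.mean(samples[1000:]))
```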

How to cite: Sukys, J. and Bacci, M.: SPUX - a Scalable Package for Bayesian Uncertainty Quantification, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9834, https://doi.org/10.5194/egusphere-egu2020-9834, 2020.

D126 |
EGU2020-8596
Valeriya Fillipova, David Leedal, and Anthony Hammond

We have recently demonstrated the utility of a machine learning-based regional peak flow quantile regression model that is currently providing flood frequency estimation for the re/insurance industry across the contiguous US river network. The scheme uses an artificial neural network (ANN) regression model to estimate flood frequency quantiles from physical catchment descriptors. This circumvents the difficult-to-justify assumption of homogeneity required by alternative ‘region of hydrological similarity’ approaches. The structure of the model is as follows: the output (dependent) variable is a set of peak flow quantiles, where the distributions used to derive the quantiles were parameterised from observations at 4,079 gauge sites using the USGS Bulletin 17C extreme value estimation method (notable for its inclusion of pre-instrumental flood events). The features (regressors) for the model were formed from 25 catchment descriptors covering geometry, elevation, land cover, soil type and climate type, for both the gauged sites and the catchments of a further 906,000 ungauged sites where peak flow quantile estimation was undertaken. The feature collection requires massive computational resources for catchment delineation and GIS processing of land-use, soil-type and precipitation data.

This project integrates many modelling and computational science elements. Here we focus attention on the ANN modelling component as this is of interest to the wider hydrology research community. We pass on our experience of working with this modelling approach and the unique challenges of working on a problem of this scale.

A baseline multiple linear regression model was generated, as were several non-linear alternative formulations. The ANN model was chosen as the best approach according to a root mean square error (RMSE) criterion. Alternative ANN formulations were evaluated; the RMSE indicated that a single hidden layer performed better than more complex multiple-hidden-layer models. Variable importance algorithms were used to assess the mechanistic credibility of the ANN model and showed that catchment area and mean annual rainfall were consistently identified as dominant features, in agreement with the expectations of domain experts, together with more subtle region-specific factors.
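
A minimal sketch of such a model (synthetic descriptors and an assumed quantile relationship, not the study's data), with a single hidden layer and permutation-based variable importance:

```python
# Single-hidden-layer ANN mapping catchment descriptors to a peak-flow
# quantile, with permutation importance to check mechanistic credibility.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 2000
area = rng.lognormal(5, 1, n)                 # km^2 (synthetic descriptors)
maar = rng.normal(900, 200, n)                # mean annual rainfall, mm
slope = rng.uniform(0.001, 0.1, n)
X = np.column_stack([area, maar, slope])
q100 = 0.05 * area**0.8 * (maar / 900) ** 1.5 * np.exp(0.1 * rng.standard_normal(n))

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
)
model.fit(X, np.log(q100))                    # log-space stabilises the fit

imp = permutation_importance(model, X, np.log(q100), n_repeats=10, random_state=0)
for name, score in zip(["area", "rainfall", "slope"], imp.importances_mean):
    print(f"{name}: {score:.3f}")             # area and rainfall should dominate
```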

The results of this study show that ANN models, used as part of a carefully configured large-scale computational hydrology project, produce very useful regional flood frequency estimates that can inform flood risk management decision-making or drive further 2D hydrodynamic modelling, and that the approach is suited to the ever-increasing scale of contemporary hydrological modelling problems.

How to cite: Fillipova, V., Leedal, D., and Hammond, A.: Regional flood frequency estimation for the contiguous USA using Artificial Neural Networks, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8596, https://doi.org/10.5194/egusphere-egu2020-8596, 2020.

D127 |
EGU2020-12221
Yue-Ping Xu, Haiting Gu, and Ma Di

Distributed hydrologic models have been widely used for their functional diversity and theoretical rationality. However, the calibration of distributed models is computationally expensive, requiring a large number of model runs even if an efficient multi-objective algorithm is employed. To alleviate the computational burden, we develop a two-stage surrogate model, coupling a backpropagation neural network with AdaBoost, to calibrate the parameters of the Variable Infiltration Capacity (VIC) model. The first-stage model selects the parameter sets whose simulated outputs lie in the crucial range, and the second-stage model accurately estimates the output values for the parameter sets picked out by the first stage. The developed surrogate model is tested in three different river basins in China, namely the Lanjiang River basin (LJR), the Xiangjiang River basin (XJR) and the Upper Brahmaputra River basin (UBR). With sufficient samples generated by ε-NSGA-II, the surrogate model performs very well, with a low classification error rate (ER) and root mean square error (RMSE). The streamflow simulated with the surrogate model is nearly the same as that from the original VIC model, while the surrogate achieves a remarkable speedup over the original model.
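
One possible arrangement of such a two-stage surrogate (a sketch with synthetic data and assumed thresholds; the study couples a backpropagation network with AdaBoost, whereas here an MLP classifier screens and an AdaBoost regressor estimates):

```python
# Stage 1 screens parameter sets whose output falls in a crucial range;
# stage 2 predicts the output value only for the retained sets.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import AdaBoostRegressor

rng = np.random.default_rng(0)
theta = rng.uniform(0, 1, size=(3000, 6))              # VIC-like parameter sets
nse = 1 - np.sum((theta - 0.5) ** 2, axis=1)           # stand-in for model skill

crucial = (nse > 0.5).astype(int)                      # assumed crucial range
stage1 = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                       random_state=0).fit(theta, crucial)
stage2 = AdaBoostRegressor(n_estimators=200, random_state=0)
stage2.fit(theta[crucial == 1], nse[crucial == 1])     # accurate where needed

candidates = rng.uniform(0, 1, size=(10, 6))
keep = stage1.predict(candidates) == 1                 # stage 1: screen
estimates = stage2.predict(candidates[keep])           # stage 2: estimate
print(keep, estimates)
```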

How to cite: Xu, Y.-P., Gu, H., and Di, M.: A two-stage surrogate model based on ANN and AdaBoost for multi-objective parameter optimization of the Variable Infiltration Capacity model, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12221, https://doi.org/10.5194/egusphere-egu2020-12221, 2020.

D128 |
EGU2020-9398
Paul Celicourt, Silvio J. Gumiere, and Alain Rousseau

Hydroinformatics, throughout its more than 25 years of existence, has been applied to a range of research areas. So far, these applications include hydraulics and hydrology, environmental science and technology, knowledge systems and knowledge management, and urban water systems management.

This paper introduces agricultural water systems management as a new application area for hydroinformatics, which it terms "agricultural hydroinformatics". It presents a discipline-delineated conceptual framework originating from the particularities of the socio-technical dimension of applying hydroinformatics in agriculture, and it epitomizes the wholeness and inter-dependencies of agricultural systems studies and modelling. The framework is suitable to support not only integrated agricultural water resources management in particular, but also agricultural sustainability in general, as well as a wide range of agricultural development situations beyond the connections between agro-economic and water engineering development and their socio-economic impacts.

The paper also highlights some contributions of hydroinformatics to agriculture, including new kinds of sensing technologies and the development of information and simulation models that have the potential to boost the reproducibility of agricultural systems research through systematic and formal records of the relationships among raw data, the processes that produce results, and the results themselves.

How to cite: Celicourt, P., Gumiere, S. J., and Rousseau, A.: Agricultural hydroinformatics: agricultural water systems management as a new application for hydroinformatics, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9398, https://doi.org/10.5194/egusphere-egu2020-9398, 2020.

D129 |
EGU2020-21079
Abdou Khouakhi, Ian Pattison, Jesús López-de la Cruz, Oliver Mendoza-Cano, Robert Edwards, Raul Aquino, Paul Lepper, Victor Rangel, Jose Ibarreche, Ismael Perez, John Davis, Ben Clark, and Miguel Martínez

Urban flooding is one of the major issues in many parts of the world, and its management is often challenging. Here we present an Internet of Things (IoT) approach for monitoring urban flooding in the city of Colima, Mexico. A network of water level and weather sensors has been developed, along with a web-based data platform integrated with IoT techniques to retrieve data over 3G/4G and Wi-Fi networks. The developed architecture uses the Message Queuing Telemetry Transport (MQTT) protocol to send real-time data packages from fixed nodes to a server that stores the retrieved data in a non-relational database. Data can be accessed and displayed through different queries and graphical representations, allowing future use in flood analysis and prediction. Additionally, machine learning algorithms are integrated into the system for short-range water level predictions at different nodes of the network.
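
As an illustration of the transport layer, a fixed node could publish a reading roughly as follows (broker address, topic and node id are placeholders; paho-mqtt 1.x constructor shown); the server side would subscribe to the topic and write the payloads to the non-relational database:

```python
# Publish a water-level reading over MQTT with the Eclipse Paho client.
import json
import time
import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="node-colima-01")        # hypothetical node id
client.connect("broker.example.org", 1883)              # placeholder broker
client.loop_start()

payload = json.dumps({
    "node": "node-colima-01",
    "water_level_cm": 127.4,                            # example reading
    "timestamp": time.time(),
})
client.publish("colima/flood/water_level", payload, qos=1)

client.loop_stop()
client.disconnect()
```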

How to cite: Khouakhi, A., Pattison, I., López-de la Cruz, J., Mendoza-Cano, O., Edwards, R., Aquino, R., Lepper, P., Rangel, V., Ibarreche, J., Perez, I., Davis, J., Clark, B., and Martínez, M.: An internet of things system for urban flood monitoring and short-term flood forecasting in Colima, Mexico, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21079, https://doi.org/10.5194/egusphere-egu2020-21079, 2020.

D130 |
EGU2020-20022
George Karavokiros, Dionysios Nikolopoulos, Stavroula Manouri, Andreas Efstratiadis, Christos Makropoulos, Nikos Mamassis, and Demetris Koutsoyiannis

Over the last 30 years, numerous water resources planning and management studies in Greece have been conducted using state-of-the-art methodologies and associated computational tools developed by the Itia research team at the National Technical University of Athens. The spearhead of Itia's research toolkit has been the Hydronomeas decision support system (which stands for "water distributor" in Greek), supporting multi-reservoir hydrosystem management. Its methodological framework is based on the parameterization-simulation-optimization approach, comprising stochastic simulation, network linear optimization for the representation of water and energy fluxes, and multicriteria global optimization, ensuring best-compromise decision-making. In its early stage, Hydronomeas was implemented in Object Pascal / Delphi. Currently, the software is being substantially redeveloped; the improved version incorporates new functionalities, several model novelties, and interconnections with other programs, e.g., EPANET. Hydronomeas 2020 will be available at the end of 2020 as a free and open-source Python package. In this work we present the key methodological advances and improved features of the current version of the software, demonstrated by modelling the extensive and challenging raw water supply system of the city of Athens, Greece.

How to cite: Karavokiros, G., Nikolopoulos, D., Manouri, S., Efstratiadis, A., Makropoulos, C., Mamassis, N., and Koutsoyiannis, D.: Hydronomeas 2020: Open-source decision support system for water resources management, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20022, https://doi.org/10.5194/egusphere-egu2020-20022, 2020.

D131 |
EGU2020-13004
Gergely Ámon


A common feature of steep-sloping watersheds is that there is a significant difference, sometimes of two or three orders of magnitude, between base flow and flash floods. In Hungary, these streams are usually ungaged, or the available flow data are very limited. The Morgó creek watershed, located in the northern part of Hungary, features steep terrain and both urban and natural land use conditions.

In this paper, different models are applied to evaluate flash flood and baseflow conditions in the Morgó creek watershed. High-probability baseflows can help to evaluate and monitor the current and future condition and health of the local ecological systems, while modelling low-probability flash floods can help to assess and prevent damage in urban areas.

Different types of models are required to generate baseflow and flash flood scenarios. For baseflow modelling, a two-dimensional finite element method was used, while for flash floods a finite volume model was applied. Morgó creek has a high peak flow with a sharply rising limb; as a result, the finite volume model is not sensitive to mesh density, and the impact of the roughness coefficient during calibration was smaller than expected. The low-flow analysis requires a more complex model to account for turbulence; therefore, the shallow water equations were used in the finite element model.

Uncertainty in hydrological model parametrization is a source of significant prediction errors. Monte Carlo simulation was applied to quantify the effect of parameter uncertainty on the watershed response. The analysis was then used in the hydrodynamic model to assess the final prediction error for baseflow and flash flood conditions. While the hydrodynamic baseflow and flash flood models have different space and time scales, the two model solutions do influence each other. Proper analysis and comparison of the selected scenarios can help to determine an optimal design for the Morgó creek watershed.

This work was undertaken as part of a project funded by grant EFOP-3.6.1-16-2016-00017.

How to cite: Ámon, G.: Baseflow and flash flood models of the ungaged Morgó watershed, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13004, https://doi.org/10.5194/egusphere-egu2020-13004, 2020.

D132 |
EGU2020-10319
Bryan A. Tolson and Juliane Mai and the GRIP-E/GL Project Team

The Great Lakes Runoff Inter-comparison Project (GRIP) includes a wide range of lumped and distributed models that are used operationally and/or for research purposes across Canada and the United States. Participating models are GEM-Hydro, WRF-Hydro, MESH, VIC, WATFLOOD, SWAT, mHM, Noah-MP, HYPE, LBRM, GR4J, HMETS, and purely statistical models; the latter are added to assess the information content of the forcing and geophysical datasets. As part of the Integrated Modelling Program for Canada (IMPC) under the Global Water Futures (GWF) program, the project aims to run all these models over several regions in Canada. We started with the Lake Erie watershed and then extended the study to the whole Great Lakes domain.

One of the main contributions of the project is the identification of a standard dataset for model building, which all participants in the inter-comparison can access and process to generate their model-specific inputs. The common dataset makes it possible to identify differences in model outputs that are due solely to the models and not to the data used to set them up. This presentation will give an update on the design of the inter-comparison and will report comparative results for two sets of streamflow gauging stations: A) gauge stations whose upstream watersheds have low human impact, and B) the most downstream gauge stations, draining directly into the lake(s).

The main results are: 1) The best-performing semi-distributed model calibrated across all stations at once is HYPE; mHM is the best distributed model calibrated at each station individually (median NSE = 0.78), while LBRM is on average the best lumped model (median NSE = 0.66). 2) The purely statistical model is highly competitive with, and even slightly outperforms, all hydrologic models except mHM in the calibration period. 3) The performance of most models decreases in urbanized areas; only models calibrated independently at each station are capable of modelling urbanized areas. 4) No significant change in performance can be observed between low-human-impact watersheds and the mostly downstream watersheds draining directly into a Great Lake.

How to cite: Tolson, B. A. and Mai, J. and the GRIP-E/GL Project Team: The Runoff Model-Intercomparison Project over Lake Erie and the Great Lakes, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10319, https://doi.org/10.5194/egusphere-egu2020-10319, 2020.

D133 |
EGU2020-15671
Milan Cisty, Veronika Soldanova, and Frantisek Cyprich

Irrigation reservoirs are used to retain water during periods of surplus and to control its subsequent use for irrigation in drought periods. While designing a reservoir, it is essential to evaluate its function and assess its ability to provide the required amount of water for irrigation, i.e. to evaluate the quantitative water balance of the reservoir. The input data used in such computations include the water inflow into the reservoir, the demand for water abstraction from the reservoir, the required outflow of water below the reservoir, and the evaporation and other water losses from the reservoir. Irrigation reservoirs at the margins of river catchments are often supplied by smaller streams, and, crucially for this work, such smaller streams often lack systematic flow measurements; determining this quantity is therefore often the main problem of the water balance evaluation. This work proposes a method for acquiring such data. In identifying the unknown stream flows required for the calculation, the authors assume that historical climatic data for the given area and measured flows in some of the nearby river catchments are available. We will present how to select river catchments whose measured flows can be used in the calculation of the unknown flow of a different stream. A case study from the Small Carpathians in Western Slovakia is reported in the presentation. The study compares a conceptual hydrologic model, linear regression with LASSO regularization, and various machine learning methods (CatBoost, Random Forest, Support Vector Machines). The accuracy of the flow estimates is evaluated with various statistical indicators.
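
A hedged sketch of the data-driven variant (entirely synthetic data and an assumed feature set) with one of the mentioned methods, Random Forest:

```python
# Estimate the unmeasured inflow of an ungauged stream from climate inputs
# and flows measured in nearby donor catchments; CatBoost or SVMs would be
# drop-in replacements for the Random Forest.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 3000                                               # daily records
precip = rng.gamma(0.5, 6.0, n)
temp = 10 + 8 * np.sin(np.arange(n) * 2 * np.pi / 365) + rng.standard_normal(n)
donor1 = 0.4 * precip + rng.gamma(1.0, 0.5, n)         # gauged neighbour flows
donor2 = 0.3 * precip + rng.gamma(1.0, 0.4, n)
target = 0.35 * precip + 0.02 * np.maximum(temp, 0) + rng.gamma(1.0, 0.3, n)

X = np.column_stack([precip, temp, donor1, donor2])
X_tr, X_te, y_tr, y_te = train_test_split(X, target, test_size=0.3,
                                          random_state=0)

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out days:", rf.score(X_te, y_te))
```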

Acknowledgements. This work was supported by the Slovak Research and Development Agency under Contract No. APVV-15-0489 and by the Scientific Grant Agency of the Ministry of Education of the Slovak Republic and the Slovak Academy of Sciences, Grant No. 1/0662/19.

How to cite: Cisty, M., Soldanova, V., and Cyprich, F.: Unmeasured inflows determination in the context of the assessment of the water balance of irrigation reservoir, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-15671, https://doi.org/10.5194/egusphere-egu2020-15671, 2020.

Chat time: Tuesday, 5 May 2020, 10:45–12:30

Chairperson: Dawei Han
D134 |
EGU2020-12796
Yanhua Qin, Xun Sun, and Baofu Li

Quantitatively evaluating the impacts of climate variability and human activities on runoff at different time scales is a challenging task. In this study, a nonlinear hybrid model integrating extreme-point symmetric mode decomposition, back-propagation artificial neural networks and a weights-connection method, based on the physical nonlinear relationship between impact factors and runoff, was developed to explore an approach to this problem. To validate its applicability, the nonlinear hybrid model was used to assess the impacts of climate variability and human activities on runoff in the Hotan River, where it performed well. The contribution of the upper-air temperature at 500 hPa was the highest (70.5%), making it the most important factor for runoff change; this factor also has the highest contribution at the different time scales. The water vapour content was responsible for 22.0% of the runoff change, whereas human activities accounted for only 7.5%, indicating that runoff in the Hotan River is more sensitive to climate variability than to human activities.

How to cite: Qin, Y., Sun, X., and Li, B.: Quantitatively assessing the impacts of climate variability and human activities on runoff, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12796, https://doi.org/10.5194/egusphere-egu2020-12796, 2020.

D135 |
EGU2020-845
Everett Snieder, Arash Shahmansouri, Chi-Han Cheng, Yuanhao Ding, Edward Graham, and Usman Khan

The Storm Water Management Model (SWMM) is a popular and widely used physics-based numerical model for water resource management and flow forecasting. Calibrating SWMM requires a large amount of geospatial and hydro-meteorological data that may be hard to collect, carry high uncertainty, and are often non-stationary. These issues are compounded when modelling large watersheds with several sub-catchments, leading to thousands of parameters that need to be calibrated collectively. The calibration process is time-consuming (and often conducted manually) and results in models that are biased and tuned only to specific events, leading to high uncertainty in the flow forecasts and thus limiting their utility.

In this research, a two-stage machine-learning process is proposed: first, to calibrate a large-scale SWMM model using a genetic algorithm (GA), and second, to bias-correct the flow forecast values using an artificial neural network (ANN) ensemble to improve real-time flow forecasts.

A SWMM model for the 14 Mile Creek Watershed in Ontario, Canada is used as a case study for the proposed method. The model contains 60 sub-catchments with 10 parameters each, and a total of 1144 elements that require calibration. The model is driven by a suite of numerical weather models and precipitation estimates (including the Global Environmental Multiscale - Local Area Model, the North American Mesoscale Forecast System, and the Rapid Refresh and High-Resolution Rapid Refresh models). These models have a lead time of up to 36 hours at an hourly resolution. A GA approach was implemented in MATLAB to calibrate the watershed for both single- and multi-event scenarios, using multi-criteria optimisation over a suite of model performance metrics (the Nash-Sutcliffe Efficiency, the peak flow difference, and the relative error of the total runoff volume). Historical precipitation and flow data with an hourly time-step were used in the calibration procedure.

Next, an ANN is trained using recent (i.e., 1- to 24-hour lag) observed flow, SWMM forecast flow, and observed precipitation to predict the SWMM bias (the difference between SWMM forecasts and flow observations). The estimated bias is used to correct the real-time SWMM forecasts driven by the precipitation forecasts. This bias-correction procedure implicitly minimizes the collective error associated with the radar forecasts, the SWMM parameter uncertainty, and the SWMM epistemic uncertainty. Ensemble methods are employed within the ANN to quantify the uncertainty of the bias-corrected forecast flows.
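
As an illustration of this bias-correction stage, the sketch below trains an MLP on lagged observed flow, SWMM forecast flow and rainfall to predict the forecast bias; the series, lag structure and network size are placeholders rather than the study's configuration.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def lagged(x, lags):
        # stack x[t-1] ... x[t-lags] as predictor columns
        return np.column_stack([x[lags - k:-k] for k in range(1, lags + 1)])

    lags = 24
    q_obs = np.random.rand(1000)    # observed flow (placeholder series)
    q_swmm = q_obs + 0.1            # SWMM forecast with a systematic bias
    rain = np.random.rand(1000)

    X = np.hstack([lagged(q_obs, lags), lagged(q_swmm, lags), lagged(rain, lags)])
    bias = (q_swmm - q_obs)[lags:]  # target: SWMM forecast minus observation

    ann = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000).fit(X, bias)
    q_corrected = q_swmm[lags:] - ann.predict(X)   # bias-corrected forecast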

Preliminary results indicate that GA-based calibration improved the NSE from 0 to 0.75; however, some single-event GA calibrations did not maintain acceptable performance (NSE > 0.65) when cross-validated against other events. Bias-corrected forecasts further improve the NSE to 0.9 for some events. A comparison between the uncalibrated, GA-calibrated, bias-corrected, and pure ANN forecasts is presented.

How to cite: Snieder, E., Shahmansouri, A., Cheng, C.-H., Ding, Y., Graham, E., and Khan, U.: Improved real-time SWMM flow forecasts using two machine learning approaches, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-845, https://doi.org/10.5194/egusphere-egu2020-845, 2020.

D136 |
EGU2020-12690
Cheng-Wei Li, Sheng-Hsueh Yang, Wen-Feng Wu, and Keh-Chia Yeh

Disaster-prevention IoT monitoring technology can address several problems in urban disaster prevention. Urban areas have often experienced extreme local rainstorms that cause flooding and traffic chaos, lengthen emergency response times and leave disaster-prevention personnel without sufficient support; during commuting hours in particular, the government has difficulty managing flooding and traffic. This research uses disaster-prevention IoT monitoring technology to investigate the causes of flooding in urban flood-prone areas, to plan the monitoring network, and to install monitoring equipment. Through the storm sewer monitoring network, warning water levels are set in the sewer system and water-level information is transmitted in real time, so that the downstream pump station can be started in advance to lower the water level in the storm sewer system and reduce the occurrence of flooding. In areas without a sewer system, pavement flooding sensors are installed to monitor surface flooding; when the land surface is flooded, regional rainfall forecasts are added to determine whether regional traffic will be affected and whether traffic instructions for no-entry areas are required. Other real-time information from rivers, regional drainage water-level stations and rainfall stations serves as the basis for decision-making. Finally, an urban storm sewer monitoring and management platform is built to provide real-time information and an overview of possible disasters. New Taipei City, Taiwan is taken as an example to carry out this research on the integration of water conservancy information.
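
The warning logic described above can be summarised in a few lines; the thresholds below are hypothetical and stand in for the values set on the platform.

    # Sketch of the platform's warning logic: compare a real-time sewer water
    # level with a preset threshold and flag the downstream pump station.
    WARNING_LEVEL_M = 1.8      # hypothetical warning water level in the sewer [m]

    def check_station(level_m, forecast_rain_mm):
        if level_m >= WARNING_LEVEL_M:
            return "start downstream pump station"
        if forecast_rain_mm > 40:   # regional forecast rainfall criterion
            return "issue surface-flooding / traffic advisory"
        return "normal operation"

    print(check_station(level_m=1.9, forecast_rain_mm=10))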

How to cite: Li, C.-W., Yang, S.-H., Wu, W.-F., and Yeh, K.-C.: Research on the Integration of Urban Flood Control Monitoring and Management Platform, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12690, https://doi.org/10.5194/egusphere-egu2020-12690, 2020.

D137 |
EGU2020-12523
Li-Chiu Chang and Fi-John Chang

In the face of increasingly frequent flood disasters, on-line regional flood inundation forecasting in urban areas is vital for city flood management, yet it remains a significant challenge because of the complex interactions and disruptions associated with highly uncertain hydro-meteorological variables and the lack of high-resolution hydro-geomorphological data. Effective on-line flood forecasting models that rapidly disseminate inundation information for threatened areas deserve appropriate technologies for early warning and disaster prevention. Artificial Intelligence (AI) has become one of the most popular techniques in flood forecasting over the last decades. We apply AI techniques to newly implemented IoT-based real-time flood depth monitoring data to build an urban AI flood warning system. The AI system integrates self-organizing feature mapping networks (SOM) with a recurrent nonlinear autoregressive network with exogenous inputs (R-NARX) for regional flood prediction. The proposed AI model, together with the IoT-based real-time flood depth datasets, can increase the value of diversified disaster-prevention information and improve the accuracy of flood forecasting. We develop an on-line correction algorithm that continuously learns and corrects the model's parameters, together with automatic operation modules, forecast output modules, and a web display interface. The proposed AI system can provide smart early flood warnings in urban areas and help the Water Resources Agency promote intelligent water disaster prevention services.

Keywords: Artificial Intelligence (AI); Artificial Neural Networks (ANN); Internet of Things (IoT); regional flood inundation forecast; spatial-temporal distribution
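
A minimal sketch of the SOM/R-NARX pairing is given below, assuming the third-party MiniSom package and scikit-learn; the data, grid size and lag structure are placeholders, and a plain MLP stands in for the recurrent NARX network.

    import numpy as np
    from minisom import MiniSom
    from sklearn.neural_network import MLPRegressor

    depths = np.random.rand(500, 30)   # 500 time steps x 30 IoT depth sensors
    rain = np.random.rand(500)

    # 1) SOM groups sensors with similar flooding behaviour (one node = one region)
    som = MiniSom(2, 2, input_len=500, random_seed=1)
    som.train_random(depths.T, 1000)
    region = [som.winner(s) for s in depths.T]   # SOM node of each sensor

    # 2) NARX-style step: lagged regional depth + exogenous rain -> next depth
    sensors0 = [i for i, r in enumerate(region) if r == region[0]]
    d = depths[:, sensors0].mean(axis=1)         # representative regional depth
    X = np.column_stack([d[:-1], rain[:-1]])     # one-step lag, exogenous input
    narx = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000).fit(X, d[1:])
    print(narx.predict(X[-1:]))                  # one-step-ahead regional depth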

How to cite: Chang, L.-C. and Chang, F.-J.: IoT-based Flood Depth Sensors in Artificial Intelligent Urban Flood Warning Systems, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12523, https://doi.org/10.5194/egusphere-egu2020-12523, 2020.

D138 |
EGU2020-7756
Lu Zhuo, Qiang Dai, and Dawei Han

Soil moisture plays an important role in the partitioning of rainfall into evapotranspiration, infiltration and runoff, and is hence a vital state variable in hydrological modelling. However, due to the spatial heterogeneity of soil moisture, most existing in-situ observation networks rarely provide sufficient coverage to capture catchment-scale soil moisture variations. Clearly, there is a need for a systematic approach to soil moisture network design, so that the catchment's spatial soil moisture information can be captured accurately with a minimal number of sensors. In this study, a simple method with low data requirements is proposed. It is based on Principal Component Analysis (PCA) and the elbow curve for determining the optimal number of soil moisture sensors, and on K-means Cluster Analysis (CA) and a selection of statistical criteria for identifying the sensor placements. Long-term (10-year) soil moisture datasets estimated with the advanced Weather Research and Forecasting (WRF) model are used as the network design inputs. For the Emilia Romagna catchment, the results show the proposed network is very efficient in estimating catchment-scale soil moisture (NSE and r of 0.995 and 0.999, respectively, for the areal mean estimation; and 0.973 and 0.990, respectively, for the areal standard deviation estimation). To retain 90% of the variance, a total of 50 sensors are needed in a 22,124 km2 catchment, significantly fewer than the original 828 WRF grid cells. Refinements and further investigations to improve the design scheme are also discussed in the paper.
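
The design chain (PCA with a variance criterion for the number of sensors, K-means for their placement) can be sketched as follows; the synthetic low-rank data and the centroid-nearest placement rule are illustrative assumptions, not the study's exact criteria.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    t = np.random.rand(3650, 5)        # 5 latent climate signals (synthetic)
    sm = t @ np.random.rand(5, 828) + 0.05 * np.random.randn(3650, 828)
    # 10 years of daily soil moisture for 828 grid cells

    pca = PCA().fit(sm)
    n_sensors = int(np.searchsorted(
        np.cumsum(pca.explained_variance_ratio_), 0.90)) + 1   # 90% variance

    # Cluster grid cells by their time series; one sensor per cluster, placed
    # at the cell closest to the cluster centroid.
    km = KMeans(n_clusters=n_sensors, n_init=10).fit(sm.T)
    placements = []
    for c in range(n_sensors):
        cells = np.where(km.labels_ == c)[0]
        dist = np.linalg.norm(sm.T[cells] - km.cluster_centers_[c], axis=1)
        placements.append(cells[np.argmin(dist)])
    print(n_sensors, placements)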

How to cite: Zhuo, L., Dai, Q., and Han, D.: Soil Moisture Network Design using Advanced Numerical Weather Prediction modelling and Data Mining technology for Hydrological applications, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7756, https://doi.org/10.5194/egusphere-egu2020-7756, 2020.

D139 |
EGU2020-12303
Vladan Babovic, Jayashree Chadalawada, and Herath Mudiyanselage Viraj Vidura Herath

Modelling of the rainfall-runoff phenomenon continues to be a challenging task for hydrologists, as the underlying processes are highly nonlinear, dynamic and interdependent. Numerous modelling strategies (empirical, conceptual, physically based, data driven) are used to develop rainfall-runoff models, as no model type can be considered universally pertinent for a wide range of problems. Recent literature reviews emphasize that the crucial step of hydrological model selection is often subjective and based on legacy. As the research outcome depends on the model choice, there is a need to automate the process of model evolution, evaluation and selection based on the research objectives, the temporal and spatial characteristics of the available data, and the catchment properties. Therefore, this study proposes a novel automated model-building algorithm relying on the machine learning technique of Genetic Programming (GP).

State-of-the-art GP applications in rainfall-runoff modelling have so far used the algorithm as a short-term forecasting tool that produces an expected future time series, much like applications of neural networks. Such simplistic applications of data-driven, black-box machine learning techniques may lead to accurate yet meaningless models that do not satisfy basic hydrological insights and may be very difficult to interpret. At the same time, it should be admitted that there is a vast amount of knowledge and understanding of physical processes that should not simply be thrown away. Thus, we strongly believe that the most suitable way forward is to couple the existing body of knowledge with machine learning techniques in a guided manner, to enhance the meaningfulness and interpretability of the induced models.

In the suggested algorithm, domain knowledge is introduced by incorporating process knowledge: model building blocks from prevailing rainfall-runoff modelling frameworks are added to the GP function set. At present, the function-set library consists of Sugawara TANK model functions, generic components of two flexible rainfall-runoff modelling frameworks (FUSE and SUPERFLEX), and the model equations of 46 existing hydrological models (MARRMoT). Perhaps more importantly, the algorithm can readily integrate any other internally coherent building blocks. This approach contrasts with other machine learning applications in rainfall-runoff modelling in that it not only produces runoff predictions but also develops a physically meaningful hydrological model that helps the hydrologist better understand the catchment dynamics. The proposed algorithm explores the model space and automatically identifies appropriate model configurations for a catchment of interest by optimizing user-defined learning objectives in a multi-objective optimization framework. The model-induction capabilities of the proposed algorithm have been evaluated on the Blackwater River basin, Alabama, United States. The model configurations evolved by the model-building algorithm are compatible with fieldwork investigations and previously reported research findings.
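
To illustrate how such building blocks can enter a GP function set, the following sketch uses the DEAP library with a toy linear-reservoir primitive; it is a minimal illustration of the mechanism, not the authors' algorithm.

    import operator
    from deap import gp

    def tank_outflow(storage, k):
        # linear-reservoir building block: outflow = k * storage
        return k * storage

    pset = gp.PrimitiveSet("MAIN", 2)          # inputs: precipitation, storage
    pset.renameArguments(ARG0="P", ARG1="S")
    pset.addPrimitive(operator.add, 2)
    pset.addPrimitive(operator.mul, 2)
    pset.addPrimitive(tank_outflow, 2)         # domain knowledge in the function set
    pset.addTerminal(0.5)

    expr = gp.genHalfAndHalf(pset, min_=1, max_=3)   # one random candidate model
    model = gp.compile(gp.PrimitiveTree(expr), pset)
    print(model(P=1.2, S=10.0))                # evaluate the candidate model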

How to cite: Babovic, V., Chadalawada, J., and Mudiyanselage Viraj Vidura Herath, H.: Physics Informed Machine Learning of Rainfall-Runoff Processes, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12303, https://doi.org/10.5194/egusphere-egu2020-12303, 2020.

D140 |
EGU2020-18828
Juan F. Farfán and Luís Cea

Hydrological models are widely used for flood forecasting, continuous streamflow simulation and water resources management. The success of a hydrological model depends on different factors, such as its formulation, data availability and parameter optimization. There are many approaches to identifying optimal parameter sets, which can be categorized into 1) local search methods and 2) global search methods. Within the group of global search methods, swarm intelligence can improve the application of surrogate models and provide robust calibration. In the present study we evaluate the latter approach using a physically based lumped model applied to 10 years of hydrologic data divided into 3 periods: 1) five years for calibration, 2) three years for validation (both statistically similar), and 3) two years for prediction; the prediction period is statistically dissimilar to the calibration and validation periods. A Monte Carlo simulation with 1000 parameter sets is run, and 4 goodness-of-fit coefficients are calculated for each parameter set in the calibration period: the Nash-Sutcliffe Efficiency (NSE), the Nash-Sutcliffe Efficiency adapted for peaks (ANSE), the Kling-Gupta Efficiency (KGE), and the Kling-Gupta Efficiency adapted for peaks (AKGE). The parameter sets and their corresponding goodness-of-fit coefficients form the training set of an artificial neural network surrogate model used to generate a simulated solution space. Once the surrogate model is trained, a swarm intelligence-based approach is used to search the simulated space. The approach is an adaptation of the Artificial Bee Colony (ABC) algorithm, which introduces a random variation in a randomly selected parameter and evaluates whether the goodness-of-fit values improve. The adaptation includes criteria that count improvements and non-improvements in the goodness-of-fit values to stop the search, and a threshold criterion for the selection of parameter sets: only those sets whose goodness-of-fit coefficients are above the threshold are passed to the swarm intelligence-based method.

The obtained parameter sets are evaluated with the hydrological model to calculate the goodness-of-fit values in the three stages (calibration, validation and prediction). In this step, the sets that yield poor simulations are used as samples to update the neural network surrogate model for a new search iteration, while those with higher goodness-of-fit coefficients are saved. Preliminary results show that this technique can boost the optimization, with improvement ratios between 1.08 and 1.27 in the goodness-of-fit coefficients. Moreover, the parameter sets found with this technique outperform those obtained with a local search method, especially in the validation and prediction stages. Specifically, in the prediction stage, an NSE of 0.77 and an ANSE of 0.83 were obtained, against an NSE of 0.45 and an ANSE of 0.57 for the local search parameter set.

Keywords: Artificial neural networks, artificial bee colony, surrogate modelling-based methods, global search methods, swarm intelligence.
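
A condensed sketch of the surrogate-assisted search is given below: an ANN maps parameter sets to a goodness-of-fit value, and an ABC-style loop perturbs one randomly chosen parameter at a time. The objective function, dimensions and step sizes are placeholders.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    params = rng.random((1000, 6))                 # Monte Carlo parameter sets
    nse = -np.sum((params - 0.7) ** 2, axis=1)     # placeholder goodness-of-fit

    surrogate = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000).fit(params, nse)

    best = params[np.argmax(nse)].copy()
    for _ in range(200):                           # ABC-style neighbourhood search
        trial = best.copy()
        j = rng.integers(6)                        # one randomly chosen parameter
        trial[j] = np.clip(trial[j] + rng.normal(0, 0.05), 0, 1)
        if surrogate.predict(trial[None])[0] > surrogate.predict(best[None])[0]:
            best = trial                           # improvement in the simulated space
    print(best)    # candidate to re-evaluate with the hydrological model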

How to cite: Farfán, J. F. and Cea, L.: A swarm intelligence-based method for hydrological model calibration through a simulated solution space, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18828, https://doi.org/10.5194/egusphere-egu2020-18828, 2020.

D141 |
EGU2020-11349
Alejandro Chamorro, Amirhossein Sahraei, Tobias Houska, and Lutz Breuer

In recent years, stable isotopes of water have become a well-known tool to investigate runoff generation processes. A proper estimation of stable water isotope concentration dynamics based on a set of independent multivariate variables would allow the quantification of the event water fraction in stream water even at times when no direct isotope measurements are available. Here we estimate stable water isotope concentrations and the derived event water fractions in stream water over 40 precipitation events. A mobile field laboratory was set up to measure high-resolution (20 min) stable isotopes of water by laser spectrometry. Artificial neural networks (ANN) were established to model the same information. We consider precipitation and antecedent-wetness hydrometrics, such as precipitation depth, precipitation intensity and soil moisture at different depths, as independent variables measured at the same high temporal resolution. An important issue is the reduction of the deviation between observations and simulations in both the training and the testing sets of the network. To minimize this difference, various combinations of variables, dimensionalities of the training and testing sets, and ANN architectures are studied. A k-fold cross-validation analysis is performed to find the best solution, and further constraints in the iteration procedure are imposed to avoid overfitting. The study was carried out in the Schwingbach Environmental Observatory (SEO), Germany. The results indicate a good performance of the optimized model in reproducing the dynamics of the isotope concentrations and the estimated event water fractions in stream water. The ANN-based model clearly outperformed a multivariate linear model, showing the smallest deviations. The optimum network consists of 2 hidden nodes with a 5-dimensional input set. This strongly suggests that ANN-based models can be used to estimate, and even forecast, the dynamics of isotope concentrations and event water fractions for future precipitation events.
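
The k-fold selection step can be sketched as follows, with scikit-learn and placeholder data standing in for the hydrometric inputs and isotope targets.

    import numpy as np
    from sklearn.model_selection import cross_val_score, KFold
    from sklearn.neural_network import MLPRegressor

    X = np.random.rand(300, 5)   # e.g. precipitation depth/intensity, soil moisture
    y = np.random.rand(300)      # stable-isotope concentration in stream water

    for n_hidden in (2, 4, 8):   # compare candidate architectures
        net = MLPRegressor(hidden_layer_sizes=(n_hidden,), max_iter=3000,
                           early_stopping=True)    # guards against overfitting
        score = cross_val_score(net, X, y, cv=KFold(5, shuffle=True),
                                scoring="neg_mean_squared_error").mean()
        print(n_hidden, score)   # pick the architecture with the smallest error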

How to cite: Chamorro, A., Sahraei, A., Houska, T., and Breuer, L.: Simulating stable water isotope derived information with the aid of artificial neural network applied on independent multivariate events, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11349, https://doi.org/10.5194/egusphere-egu2020-11349, 2020.

D142 |
EGU2020-1536
Reza Taherdangkoo, Alexandru Tatomir, Mohammad Taherdangkoo, and Martin Sauter

Hydraulic fracturing fluid migration from the deep subsurface along abandoned wells may pose contamination threats to shallow groundwater systems. This study investigates the application of a nonlinear autoregressive (NAR) neural network to predict leakage rates of fracturing fluid into a shallow aquifer in the presence of an abandoned well. The NAR network was trained using the Levenberg-Marquardt (LM) and Bayesian Regularization (BR) algorithms. The dataset employed in this study comprises fracturing fluid leakage rates to the aquifer overlying the Posidonia shale formation in the North German Basin (Taherdangkoo et al., 2019). We evaluated the performance of the developed models using the mean squared error (MSE) and the coefficient of determination (R2). The results indicate the robustness and compatibility of the NAR-LM and NAR-BR models in predicting fracturing fluid leakage to the aquifer, and show that NAR neural networks are useful and hold considerable potential for assessing the groundwater impacts of unconventional gas development.

References

Taherdangkoo, R., Tatomir, A., Anighoro, T., & Sauter, M. (2019). Modeling fate and transport of hydraulic fracturing fluid in the presence of abandoned wells. Journal of Contaminant Hydrology, 221, 58–68. https://doi.org/10.1016/j.jconhyd.2018.12.003
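
A minimal Python sketch of a NAR-type model is given below: an MLP on lagged values of the series, applied recursively for multi-step prediction. scikit-learn's optimiser stands in for the LM/BR training used in the study, and the series is synthetic.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    leak = np.sin(np.linspace(0, 20, 400)) + 1.1   # synthetic leakage-rate series
    p = 4                                          # autoregressive order
    X = np.column_stack([leak[i:len(leak) - p + i] for i in range(p)])
    y = leak[p:]                                   # next value from p past values

    nar = MLPRegressor(hidden_layer_sizes=(10,), max_iter=4000).fit(X, y)

    window = list(leak[-p:])
    for _ in range(10):                            # recursive 10-step forecast
        nxt = nar.predict(np.array(window[-p:])[None])[0]
        window.append(nxt)
    print(window[p:])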

How to cite: Taherdangkoo, R., Tatomir, A., Taherdangkoo, M., and Sauter, M.: Nonlinear autoregressive neural networks to predict fracturing fluid flow into shallow groundwater, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1536, https://doi.org/10.5194/egusphere-egu2020-1536, 2020.

D143 |
EGU2020-1963
Hossein Foroozand and Steven V. Weijs

Machine learning is a fast-growing branch of data-driven modelling, whose main objective is to use computational methods to predict outcomes more accurately without being explicitly programmed. In this field, one way to improve model predictions is to use a large collection of models (called an ensemble) instead of a single one. Each model is trained on slightly different samples of the original data, and their predictions are averaged. This is called bootstrap aggregating, or bagging, and is widely applied. A recurring question in previous works has been: how should one choose the ensemble size of training data sets for tuning the weights in machine learning? The computational cost of ensemble-based methods scales with the size of the ensemble, but excessively reducing the ensemble size comes at the cost of reduced predictive performance. The choice of ensemble size has often been based on the size of the input data and the available computational power, which can become a limiting factor for larger datasets and the training of complex models. Our hypothesis in this research is that if an ensemble of artificial neural network (ANN) models, or of any other machine learning technique, uses only the most informative ensemble members for training, rather than all bootstrapped ensemble members, the computational time can be reduced substantially without negatively affecting the performance of the simulation.
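
One rough reading of this idea, sketched below, ranks bootstrapped training sets by the Shannon entropy of their target values and keeps only the most informative members for training; this is an illustrative interpretation, not the authors' code.

    import numpy as np

    rng = np.random.default_rng(1)
    y = rng.gamma(2.0, 1.0, 500)                  # original training targets

    def entropy(sample, bins=20):
        p, _ = np.histogram(sample, bins=bins)
        p = p[p > 0] / p.sum()
        return -(p * np.log(p)).sum()             # Shannon entropy [nats]

    boots = [rng.choice(y, size=y.size) for _ in range(100)]   # bootstrap sets
    ranked = sorted(boots, key=entropy, reverse=True)
    selected = ranked[:20]     # train the ANN ensemble on these members only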

How to cite: Foroozand, H. and Weijs, S. V.: Entropy Ensemble Filter: Does information content assessment of bootstrapped training datasets before model training lead to better trade-off between ensemble size and predictive performance? , EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1963, https://doi.org/10.5194/egusphere-egu2020-1963, 2020.

D144 |
EGU2020-13195
Tianrui Pang, Jiping Jiang, Bellie Sivakuamr, Yi Zheng, and Tong Zheng

Information entropy theory has been widely applied in hydrological modeling and engineering optimization. Recently, the entropy-based description and explanation of reactive solute mixing and transport processes has received increasing attention. The literature mainly focuses on theoretical analysis of hypothetical cases; direct observation and calculation with field datasets are rarely reported.

This work studies the change of information entropy in a surface water solute transport system with field data. A comprehensive information entropy-based analysis framework is proposed, which works like a combined optical system of sources, filters, prisms and images. We establish four basic probability spaces, leading to four basic information entropy indexes: the dilution index (E), the flux index (F), the spatial entropy index (Gx), and the temporal entropy index (Gt).

The evolution of information entropy in a one-component solute diffusion system is studied using discrete information entropy analysis. Under a fixed-observation definition of the system boundary, the information entropy shows a peak in both the time and space dimensions, with the peak value occurring within the first 20%-30% of the fixed observation interval; under a dynamic-observation definition, the information entropy decreases continuously with increasing time and travel distance. A local sensitivity analysis of the hydrodynamic parameters of the analytical solutions shows that the sensitivity of the information entropy H to the diffusion coefficient Dx is relatively constant; that the larger the degradation coefficient k, the more sensitive the entropy at a given monitoring time t is to k; and that the spatial change of information entropy becomes more sensitive to the flow velocity ux as distance increases, whereas its temporal change is insensitive to ux.

Furthermore, the evolution of information entropy in a complex river water quality process is studied, with the Guangming section of the Maozhou River in Shenzhen as the study area. The BOD-DO and nitrogen (NH3-N, NO3-N, Org-N) water quality processes were selected, and a one-dimensional S-P model and a WASP_EUTRO water quality model were constructed, respectively. After model calibration and verification, the changing characteristics of information entropy, mutual information and an information transfer index are analyzed under the fixed-observation system definition. It is found that, in the complex water quality process, the transformation (reaction) process gradually replaces the diffusion process as the main factor affecting the change of information entropy, and the entropy evolution law found for single-component diffusion no longer holds.
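
As an illustration of the first of these indexes, the sketch below computes a dilution-index-type quantity as the exponential of the Shannon entropy of a normalised 1-D concentration field; the plume and the discretisation are synthetic.

    import numpy as np

    x = np.linspace(0, 1000, 501)                  # 1-D river reach [m]
    dx = x[1] - x[0]
    c = np.exp(-((x - 300) ** 2) / (2 * 50 ** 2))  # plume-like concentration

    p = c / np.sum(c * dx)                         # probability density in space
    mask = p > 0
    H = -np.sum(p[mask] * np.log(p[mask]) * dx)    # entropy of the plume
    E = np.exp(H)                                  # dilution index (length in 1-D)
    print(E)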

How to cite: Pang, T., Jiang, J., Sivakuamr, B., Zheng, Y., and Zheng, T.: The Information Entropy Prisms on Riverine Water Quality Evolution, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13195, https://doi.org/10.5194/egusphere-egu2020-13195, 2020.

D145 |
EGU2020-3391
Jieyu Li, Ping-an Zhong, Minzhi Yang, and Qingwen Lu

Real-time joint operation of multi-reservoir systems is vital for flood control and management in a river basin. The high dimensionality of the joint operation model and the complex decision-making environment remain the main difficulties. This paper develops a general framework for reducing the complexity of flood control operation by identifying effective reservoirs. First, considering the factors that influence a reservoir's flood control effect, a criteria system for identifying effective reservoirs is proposed. Then, different classification models based on ensemble learning are established. In real-time operation, effective reservoirs are identified intelligently by sensing real-time information on the temporal and spatial distribution of storm floods and on the variation of reservoir flood control capacity. On this basis, a hybrid equivalent operation model is established adaptively, consisting of a joint operation model of the effective reservoirs and separate operation models of the non-effective reservoirs. A case study of the flood control system in the Huaihe River basin in China indicates that: (1) the ensemble-learning classification models can dynamically identify effective reservoirs from real-time flood and reservoir information; and (2) the flood control effect of the hybrid equivalent operation model is similar to that of the joint operation model of all reservoirs. In real-time flood control operation, the proposed method can thus dynamically combine the two operation modes under different flood control situations, make the best use of reservoir storage capacity and reduce the complexity of flood control operation.

Keywords: multi-reservoir system; real-time flood control operation; effective reservoir; ensemble learning; hybrid equivalent operation model
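
The identification step can be illustrated with a random-forest classifier, one common ensemble-learning choice; the features, labels and decision rule below are hypothetical placeholders.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # features, e.g.: remaining flood-control storage, inflow peak, distance of
    # the storm centroid to the reservoir (columns are placeholders)
    X = np.random.rand(500, 3)
    y = (X[:, 0] > 0.4) & (X[:, 1] > 0.5)          # synthetic "effective" label

    clf = RandomForestClassifier(n_estimators=200).fit(X, y)
    flood_now = np.array([[0.7, 0.9, 0.2]])        # real-time flood information
    print(clf.predict(flood_now))  # -> include this reservoir in joint operation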

How to cite: Li, J., Zhong, P., Yang, M., and Lu, Q.: Ensemble learning for dynamic modeling in flood control operation of multi-reservoir systems, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3391, https://doi.org/10.5194/egusphere-egu2020-3391, 2020.

D146 |
EGU2020-3419
Shaokun He, Dimitri Solomatine, Oscar Marquez-Calvo, and Shenglian Guo

Modern water resource management requires more robust flood control operation of cascade reservoirs to cope with an increasingly dynamic external environment, the ultimate goal being robust optimization for multiple purposes. To this end, a number of studies on flood control operation have developed various methods for robust optimization in the presence of uncertainties, and in some cases they work well. However, these approaches usually incorporate uncertainty into the flood control objectives or constraints and consequently lack explicit robustness indicators that can help decision-makers fully assess the impact of the uncertainty. In order to construct a mature framework for explicit robust optimization of flood control operation, this study uses the Robust Optimization and Probabilistic Analysis of Robustness (ROPAR) technique to identify robust flood-limited water levels of cascade reservoirs that achieve a satisfactory compromise between hydropower production and flood control risk, taking into account the streamflow variability during the flood season: (1) the Monte Carlo method is employed to sample the input set according to the historical streamflow records; (2) the non-dominated sorting genetic algorithm II (NSGA-II) generates a series of Pareto fronts, one for each hydrograph sample; (3) the ROPAR technique helps build the empirical distribution of hydropower production values corresponding to chosen levels of flood control risk and supports a probabilistic analysis of the Pareto fronts; (4) the ROPAR technique identifies the final robust solutions according to certain criteria. A reservoir cascade in the Yangtze River basin, China, is considered as a case study. The presented approach allows the propagation of uncertainty from the uncertain inflow to the candidate optimal solutions to be studied, and the most robust solution to be selected, thus better informing decisions related to reservoir operation.

Keywords: multi-objective reservoir system, robust optimization, uncertainty, flood control operation, Yangtze River basin

Reference:

Marquez-Calvo, O.O., Solomatine, D.P., 2019. Approach to robust multi-objective optimization and probabilistic analysis: the ROPAR algorithm. J Hydroinform, 21(3): 427-440. DOI:10.2166/hydro.2019.095
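
A self-contained sketch of the ROPAR logic is given below, with toy objectives in place of the reservoir model and of NSGA-II: for each sampled inflow a Pareto front is built, and the spread of hydropower at a fixed flood-risk level is then examined.

    import numpy as np

    rng = np.random.default_rng(2)

    def objectives(level, inflow):
        power = level                              # toy: higher level, more power
        risk = level * inflow                      # toy: higher level, more risk
        return power, risk

    def pareto(points):
        # keep points not dominated in (maximise power, minimise risk)
        keep = []
        for p, r in points:
            if not any(p2 >= p and r2 <= r and (p2, r2) != (p, r)
                       for p2, r2 in points):
                keep.append((p, r))
        return keep

    power_at_risk = []
    for inflow in rng.lognormal(0.0, 0.3, 100):    # Monte Carlo inflow samples
        cands = [objectives(lvl, inflow) for lvl in rng.random(50)]
        front = pareto(cands)
        # hydropower of the front member closest to a chosen risk level (0.5)
        power_at_risk.append(min(front, key=lambda pr: abs(pr[1] - 0.5))[0])

    print(np.percentile(power_at_risk, [5, 50, 95]))  # robustness distribution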

How to cite: He, S., Solomatine, D., Marquez-Calvo, O., and Guo, S.: Towards robust optimization of cascade operation of reservoirs considering streamflow uncertainty, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3419, https://doi.org/10.5194/egusphere-egu2020-3419, 2020.

D147 |
EGU2020-12572
Felipe Sierra, Jorge Sanabria, Gerald Corzo, and Germán Santos

Reservoir operation is a task that always relates to integrated water resources management concepts, and the operating rules of such systems must adapt to changes in water uses and in their prioritization. La Copa reservoir, located in the upper Chicamocha river basin in Colombia, was originally built to mitigate floods over the upper Chicamocha valley; however, an irrigation district was later established, with the objective of supplying water to farmers. This study presents the analysis and optimization of operational rules to minimize the likelihood of floods and of shortages for the irrigation district, accounting for the uncertainty in the hydrological system.

A methodology is developed to obtain the optimal management and operation of the reservoir, aiming to reduce droughts and floods in the regulated basin. A simulation model of the reservoir built with the HEC-ResSim tool was used to derive an optimal guide curve, which in this study is the basis for operational decisions. A continuous-simulation hydrological model built with the HEC-HMS tool was calibrated using annual series of daily flows and provides the inflows to the reservoir model. A two-dimensional hydrodynamic model (HEC-RAS 2D) was used to test the results of regulation by comparing simulations under current and optimal regulation conditions. Several guide curves were developed for the evaluation of the operation; four of them were selected and tested with the HEC-ResSim model by quantifying failures against the minimum and maximum discharge volumes. Finally, the guide curve with the fewest failures was selected as the one providing the best system operation. The benefits of the selected guide curve were verified by routing the regulated hydrographs through the 2D hydraulic model. The simulation was carried out for the most critical period in terms of flows and maximum rainfall, from April 6 to May 15, 2011, with the period between April 15 and 21 showing the highest flow through the critical sector. Unregulated conditions were also evaluated using the flows of the hydrological model. It is found that, in the simulation of April 15, the channel shows a notable improvement thanks to the controlled releases from the La Copa reservoir. The methodology presents a simple and practical way to obtain near-optimal operational rules for a multipurpose reservoir.

How to cite: Sierra, F., Sanabria, J., Corzo, G., and Santos, G.: Determination Of The Optimal Guide Curve For A Reservoir, Case Study Copa Dam, Boyacá, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12572, https://doi.org/10.5194/egusphere-egu2020-12572, 2020.

D148 |
EGU2020-11729
German Ricardo Santos Granados, Jordi Rafael Palacios Gonzalez, Jorge Alberto Escobar Varga, and Gerald Augusto Corzo Perez

Particle tracking is very important for the appropriate management of water resources. The morphological heterogeneity of rivers makes the prediction of particle motion difficult due to the complex numerical and physical variations in the mathematical formulation. Data availability in recent years has allowed the dimensionality of the problem to be extended and coupled models to be used for a better understanding of these patterns. Aside from this, the hydrogeomorphic characteristics of mountain rivers are poorly studied around the world. In certain cases, like the river La Miel in Colombia, there are strong dynamics associated with external variables such as the operation of a reservoir, and the environmental conditions of the operation and the transport of particles are important for determining its environmental impacts. In this research, a hydrodynamic modelling exercise coupled with particle tracking was developed to determine transport patterns. The model was developed using the Delft3D software. Information on the hydrophysical characteristics of the La Miel river downstream of the La Miel hydroelectric complex, located in Caldas, Colombia, was gathered in a field campaign on 21 and 27 July 2019. The bathymetries were collected using an ECHOMA 54v, and river velocities were obtained with an ADCP River Ray, over a 10 km reach. Data corrections were applied so that the resulting digital elevation model and topographic conditions provided a consistent basis for the construction of the two-dimensional hydrodynamic model. Steady flow was assumed, because the variation of the wetted areas and hydraulic conditions is influenced only by the rules of the hydroelectric operation. Finally, the hydrodynamic model was coupled with the particle-tracking model to determine transport patterns. The results feed into an ongoing project that aims to describe the movement and behavior of small marine species, the travel trajectory of pollutants, and other local uses such as forensic investigation in rivers; they will also be used to study the dynamics of high mountain rivers.

How to cite: Santos Granados, G. R., Palacios Gonzalez, J. R., Escobar Varga, J. A., and Corzo Perez, G. A.: Determination of particle transport patterns in a high mountain river influenced by the construction of reservoirs, using particle tracking techniques and hydrodynamic modeling, case study: Río La Miel., EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11729, https://doi.org/10.5194/egusphere-egu2020-11729, 2020.

D149 |
EGU2020-9880
Paolo Ciampi, Carlo Esposito, Giorgio Cassiani, and Marco Petrangeli Papini

The management of a contaminated site requires the simultaneous integration of information related to the hydrogeophysical sphere in all its dimensions. The construction of a 3D multidisciplinary geodatabase and the realization of an integrated model constitute the tools for the management, fusion, integration and analysis of multi-source data. This research aims to demonstrate the contributions of a multiple-lines-of-evidence approach to refining the Conceptual Site Model (CSM), assessing the contamination, and successfully remediating a polluted site. An illustrative case history is presented: the military airport of Decimomannu (Cagliari, Italy), affected by several aviation fuel (jet fuel JP-8) spills in 2007 (40000 L), 2009 (5000 L) and 2010 (5000 L). A multiscale approach was followed for the creation of a 3D hydrogeophysical model, which acts as an effective "near real time" decision support system able to manage and release data during the different remediation phases, from site characterization up to the remediation intervention itself, while allowing the user to view, query and process data in 3D space. The construction of a multi-source conceptual model, together with Laser Induced Fluorescence (LIF) and Electrical Resistivity Tomography (ERT), captures the information related to the hydrogeochemical sphere in all its dimensions. The 3D pseudo-realistic visualization captures the high-resolution characterization of geological heterogeneity and of the contaminated bodies at the scale of the pollution mechanisms and decontamination processes. The physicochemical, data-driven model, which links geophysical signals to contaminant characteristics within the contaminated porous media, explains the observed contaminant-geophysical behaviour. The interpretation of the contaminant dynamics has strong implications for the reliability of the CSM, affecting the selection and the performance of the remediation strategy. The display of integrated data allows real-time interaction with the multi-source model (and the 3D geodatabase) to extract useful information for decision-making during the different stages of remediation. The rich dataset and the data-driven models collect and connect the environmental variables, optimize the contribution of each aspect, and unequivocally support the design and adoption of an effective and sustainable clean-up intervention.

How to cite: Ciampi, P., Esposito, C., Cassiani, G., and Petrangeli Papini, M.: A 3D Multi-Source Conceptual Model to Support the Remediation of a Jet Fuel Contaminated Site, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9880, https://doi.org/10.5194/egusphere-egu2020-9880, 2020.

D150 |
EGU2020-20523
Nithin Shettigar, Anna Spinosa, Lorinc Meszaros, Sandra Gaytan Aguilar, and Ghada El Serafy

During the past decades, the aquaculture industry has developed rapidly and, due to a continuously growing market, a high demand for new installations, both nearshore and offshore, is already observed. Various studies show direct correlations between fish growth parameters and water quality variables, among which the most important are temperature, salinity and dissolved oxygen concentration. These variables can directly impact planning and farming operations such as the location, height and depth of cages, the stocking density, or the fish feeding rate. Moreover, for sustainable seafood production, management practices aiming to reduce food waste and the spread of diseases should be in place.

At present, a large section of the farming sector depends on ad-hoc measurements of water quality without a forecasting mechanism. With the availability of ocean hydrodynamic and water quality data from sources such as the Copernicus Marine Environmental Monitoring Service (CMEMS), and of atmospheric data from the European Centre for Medium-Range Weather Forecasts (ECMWF), water quality variables can be simulated and forecast well in advance with numerical modelling tools.

Within the framework of the EU H2020-funded HiSea project, a new high-resolution coastal 3D hydrodynamic model aiming to describe the vertical gradients of temperature and salinity and their seasonal variations is developed for the southern Aegean Sea of Greece. The Delft3D Flexible Mesh modelling tool is used, which allows for computationally economical grid development. Data from CMEMS are used to set up the model boundary conditions. A complex heat flux model is employed for the temperature computations, which means that the model needs to be provided with several atmospheric forcing variables, such as wind speed, air temperature, dew point temperature and mean sea level pressure; these are derived from the ERA5 single-level reanalysis of ECMWF. The output variables show a seasonal trend due to changes in the atmospheric forcing. The developed model therefore simulates seasonal water quality conditions and gives important insights into the vertical gradients of temperature and salinity. Validation of the model outputs is carried out at multiple levels: the simulated water level is verified against Intergovernmental Oceanographic Commission (IOC) mean sea level measurements, while the simulated temperature at the two aquaculture sites is verified against daily in-situ measurements.

The uncertainties in the model outputs (temperature and salinity) are estimated through ensemble simulation, using different atmospheric forcings from ERA5 and perturbed model process parameters as sources of uncertainty. The application of ensemble simulations to understand the vertical gradients of water quality parameters is a unique approach, and the application of numerical model simulations to optimize aquaculture planning and operation is innovative. The research could be replicated for other marine sectors where water quality variables are of paramount importance.

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 821934.
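
The ensemble step can be reduced to a few lines: perturb the forcing and a process parameter, rerun the model, and summarise the spread. The stand-in model below is purely illustrative.

    import numpy as np

    rng = np.random.default_rng(3)

    def model_surface_temp(air_temp, heat_coeff):
        return 0.8 * air_temp + heat_coeff        # stand-in for the 3D model

    runs = [model_surface_temp(20 + rng.normal(0, 0.5),   # perturbed ERA5 forcing
                               rng.normal(2.0, 0.2))      # perturbed parameter
            for _ in range(50)]
    print(np.mean(runs), np.std(runs))            # ensemble mean and uncertainty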

How to cite: Shettigar, N., Spinosa, A., Meszaros, L., Gaytan Aguilar, S., and El Serafy, G.: Ensemble simulation of sea water temperature and salinity and their seasonal variations in vertical gradient – An application to aquaculture operations in Southern Aegean Sea, Greece, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20523, https://doi.org/10.5194/egusphere-egu2020-20523, 2020.

D151 |
EGU2020-19770
Ekaterina Rets, Maria Kireeva, and Timophey Samsonov

This study presents an approach to automatic river hydrograph separation and analysis implemented in GrWat, an open-source package for the R programming language. In the proposed scheme, the river hydrograph is separated into base flow and quick flow. For plain rivers, the quick flow is further separated into seasonal snowmelt flood quick flow, rain quick flow and thaw quick flow; for mountainous rivers, the seasonal snowmelt flood component is divided into a "basic snowmelt flood" component and overlapping rain floods. Base and quick runoff are separated by a critical gradient. Flash floods are separated from the seasonal snowmelt wave by critical values of air temperature and precipitation during the event for plain rivers, and using the critical-gradient concept for mountainous rivers. More than 30 characteristics of the river runoff regime are calculated for each water-resource year: characteristics of annual and seasonal runoff, the contribution of each genetic component, characteristics of maximum runoff, and n-day minimum discharges and the dates on which they are observed. Additionally, more than 50 characteristics of each flash flood are calculated: characteristics of the shape, volume and timing of the flood, and the values of the meteorological parameters that bring about the different types of floods. The presented approach was tested on 45 plain rivers in different climatic zones of the European part of Russia and on 10 mountainous rivers in the North Caucasus. The results make it possible to analyze previously unstudied characteristics of the river runoff regime and its climate-related transformation in the European part of Russia.

The study was supported by the Russian Science Foundation grant No. 19-77-10032
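
The critical-gradient separation at the core of the scheme can be sketched as follows (a Python stand-in for the R package; the threshold and data are illustrative, not GrWat defaults).

    import numpy as np

    q = np.array([5, 5, 6, 14, 30, 22, 12, 8, 6, 5.5, 5.4])   # daily discharge
    grad_crit = 0.5        # m3/s per day: faster changes count as quick flow

    base = q.copy()
    for i in range(1, len(q)):
        # base flow may rise only up to the critical gradient per step
        base[i] = min(q[i], base[i - 1] + grad_crit)
    for i in range(len(q) - 2, -1, -1):
        # same constraint applied backwards for the recession limb
        base[i] = min(base[i], base[i + 1] + grad_crit)
    quick = q - base       # flood (quick-flow) component of the hydrograph
    print(base.round(2), quick.round(2))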

How to cite: Rets, E., Kireeva, M., and Samsonov, T.: Automatic hydrograph separation approach provides possibility to look at less-studied characteristics of water regime, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19770, https://doi.org/10.5194/egusphere-egu2020-19770, 2020.

D152 |
EGU2020-3038
Andrej Vidmar and Mitja Brilly

Calibrating the parameters of a conceptual model with the Gauss-Marquardt-Levenberg (GLM) procedure, in combination with singular value decomposition and Tikhonov regularization, allows the exact parameter values to be recovered from synthetically generated flows. With this procedure, the calibration noise is practically eliminated when simulating phenomena based on measured values of the output variable, and the noise in the calculation results is practically the same in calibration and in validation. The residual noise in the results is due to the noise of the model concept, the model design, and the accuracy of the measurements themselves.

An analysis based on synthetically determined discharges was selected for the study. Instead of measurements, we calibrated the model against discharges computed with known parameters. In this way, we eliminated measurement noise, model-concept noise and model-design noise from the results. From a mathematical standpoint, perfect calibration can then be expected, and any deviations are due to the noise inherent in the calibration procedure itself: the differences between the calculated and the synthetic results contain only the noise of the calibration process.

For the hydrological model, we chose a version of the HBV program called HBV-light. The model is semi-distributed, since it allows the basin to be divided into smaller sub-basin units, and each sub-basin can be further subdivided into smaller areas based on land use and altitude. The model includes computational procedures describing the following hydrological processes: snow accumulation and melting, evapotranspiration assessment and soil moisture calculation, subsurface runoff, and water flow transformation in the riverbed (Bergström, 1995; IHMS, 1999).

The calculations were performed with the HBV-light software on a test model, the Dreta river model, and the model of the Savinja river basin, a tributary of the Sava River in Slovenia. The test model has 16 parameters, and full calibration accuracy was achieved with the GLM calibration process. The Dreta River model, covering the head part of the Savinja River basin, contains 34 parameters; its calibration results revealed weaknesses in the concept of the model. The Savinja River basin was subdivided into 77 sub-basins, and the results showed the benefits of using regularization when calibrating the model.
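
The synthetic-calibration test can be reproduced in miniature on a toy two-parameter model: generate discharges with known parameters, then recover them with a Levenberg-Marquardt least-squares fit (scipy standing in for the GLM procedure; the model is a placeholder, not HBV-light).

    import numpy as np
    from scipy.optimize import least_squares

    rain = np.random.rand(200)
    true = np.array([0.6, 0.3])                   # "known" model parameters

    def toy_model(theta, p):
        # linear store: q[t] = theta0*q[t-1] + theta1*p[t]
        q = np.zeros_like(p)
        for t in range(1, len(p)):
            q[t] = theta[0] * q[t - 1] + theta[1] * p[t]
        return q

    q_synth = toy_model(true, rain)               # noise-free synthetic "truth"
    res = least_squares(lambda th: toy_model(th, rain) - q_synth,
                        x0=[0.5, 0.5], method="lm")
    print(res.x)   # should match `true` up to the noise of the procedure itself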

How to cite: Vidmar, A. and Brilly, M.: Structural noise analysis in the simulation of hydrological models using a synthetically defined output variable, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3038, https://doi.org/10.5194/egusphere-egu2020-3038, 2020.