HS3.1 | Hydroinformatics: data analytics, machine learning, hybrid modelling, optimisation | EDI
Convener: Claudia Bertini | Co-conveners: Amin Elshorbagy, Alessandro Amaranto, Niels Schuetze, Francesca Pianosi
Orals | Tue, 16 Apr, 08:30–12:25 (CEST), Room 3.16/17 | Wed, 17 Apr, 08:30–10:10 (CEST), Room 3.16/17
Posters on site | Attendance Tue, 16 Apr, 16:15–18:00 (CEST) | Display Tue, 16 Apr, 14:00–18:00 | Hall A
Hydroinformatics has emerged over the last decades to become a recognised and established field of independent research within the hydrological sciences. It is concerned with the development and application of mathematical modelling, information technology, systems science and computational intelligence tools in hydrology. Hydroinformatics nowadays also deals with collecting, handling, analysing and visualising Big Data coming from remote sensing, Internet of Things (IoT), earth and climate models, and defining tools and technologies for smart water management solutions.
The aim of this session is to provide an active forum in which to demonstrate and discuss the integration and appropriate application of emergent techniques and technologies in water-related contexts.
Topics addressed in the session include:
* Predictive and exploratory models based on the methods of statistics, computational intelligence, machine learning and data science: neural networks, fuzzy systems, genetic programming, cellular automata, chaos theory, etc.
* Methods for the analysis of Big Data and complex datasets (remote sensing, IoT, earth system models, climate models): principal and independent component analysis, time series analysis, clustering, information theory, etc.
* Optimisation methods associated with heuristic search procedures (various types of genetic and evolutionary algorithms, randomised and adaptive search, etc.) and their application to the planning and management of water resources systems
* Multi-model and hybrid modelling approaches that blend process-based (mechanistic) and data-driven (machine learning) models
* Data assimilation, model reduction in integrated modelling, and High Performance Computing (HPC) in water modelling
* Novel methods for analysing and quantifying model uncertainty and sensitivity
* Smart water data models and software architectures for linking different types of models and data sources
* IoT and Smart Water Management solutions
* Digital Twins for hydrology and water resources
Applications could belong to any area of hydrology or water resources, such as rainfall-runoff modelling, hydrometeorological forecasting, sedimentation modelling, analysis of meteorological and hydrologic datasets, linkages between numerical weather prediction and hydrologic models, model calibration, model uncertainty, optimisation of water resources, smart water management.

Dear authors,

Please remember to upload your presentation online; you have received an email from Copernicus with the link and instructions. Oral presentations will be 8 minutes each, with 2 minutes for Q&A. See you tomorrow!

Orals: Tue, 16 Apr | Room 3.16/17

Chairpersons: Claudia Bertini, Alessandro Amaranto
Artificial Intelligence for water resources: hydrological monitoring, modelling and forecasting
08:30–08:35
08:35–08:45 | EGU24-5798 | On-site presentation
Ashaf Ahmed and Antoifi Abdoulhalik

Hybrid machine learning (ML) models have exhibited strong forecasting accuracy across water-related fields, often outperforming standalone ML algorithms in both predictive skill and computational efficiency. Meanwhile, transformers have demonstrated remarkable capabilities in natural language processing tasks due to their attention mechanism; their application to time series forecasting, particularly in hydrology and streamflow prediction, is an evolving area. This study compared the performance of transformers to various hybrid deep learning models in forecasting river streamflow in the Syr Darya basin. The hybrid models included LSTM with an attention mechanism (LSTM-AM), LSTM with ARIMA (LSTM-AR), and Convolutional Neural Networks combined with LSTM (ConvLSTM). The forecasting performance of each model was tested at three hydrological stations located upstream, midstream, and downstream along the Syr Darya River. Performance was evaluated by comparing the RMSE, MAE, NSE and KGE values achieved by each model. The streamflow datasets exhibit short-term dependencies that LSTM models can capture effectively, while transformers are more parameter-intensive than LSTMs. Simpler models such as LSTMs perform relatively well and achieve predictive accuracy comparable to the hybrid models. While LSTM-based models are found to be better suited for short-term forecasting, the transformer model tends to excel in longer-term predictions, as it is better at capturing long-range dependencies in sequences. Nonetheless, all models exhibit a decline in predictive performance with an increasing forecasting horizon. The findings of this study support the suitability of transformers for high-performance, cost-effective river flow forecasting applications while minimising data processing time.
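
The comparison above scores forecasts with RMSE, MAE, NSE and KGE. As a minimal illustration of the two hydrology-specific metrics (standard definitions, not the authors' code), assuming simulated and observed flows are aligned NumPy arrays:

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe Efficiency: 1 is perfect, 0 is no better than the mean of the observations."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(sim, obs):
    """Kling-Gupta Efficiency: combines correlation, variability ratio and bias ratio."""
    r = np.corrcoef(sim, obs)[0, 1]     # linear correlation
    alpha = sim.std() / obs.std()       # variability ratio
    beta = sim.mean() / obs.mean()      # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

# Hypothetical daily flows at one station (m3/s)
obs = np.array([12.0, 15.0, 30.0, 22.0, 18.0])
sim = np.array([11.0, 16.0, 27.0, 24.0, 17.0])
print(f"NSE = {nse(sim, obs):.3f}, KGE = {kge(sim, obs):.3f}")
```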

How to cite: Ahmed, A. and Abdoulhalik, A.: The Application of Transformers and Hybrid Deep Learning Models for Streamflow Forecasting, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-5798, https://doi.org/10.5194/egusphere-egu24-5798, 2024.

08:45–08:55 | EGU24-8899 | ECS | On-site presentation
Martin Gauch, Frederik Kratzert, Vusumuzi Dube, Oren Gilon, Daniel Klotz, Asher Metzger, Grey Nearing, Florence Ofori, Guy Shalev, Shlomo Shenzis, Tadele Tekalign, Dana Weitzner, Oleg Zlydenko, and Deborah Cohen

Deep learning approaches have emerged as the state of the art for rainfall–runoff modeling. Yet—until now—the best-performing models have typically been used with inputs that are averaged across possibly large catchment areas, modeling each (sub-)basin independently. This lumped modeling approach is in contrast to reality, where rivers form networks of connected subbasins. This discrepancy limits our ability to accurately and interpretably predict certain types of rivers. Here, we present recent work to build graph-based deep learning models that explicitly account for this network structure. These models promise to unlock improvements in both the quality and interpretability of predictions:

Catchment size: Lumped models lack information about spatial heterogeneity, i.e., they do not know where inside a basin events (such as precipitation) occur. This makes it hard to model large basins, especially at high temporal resolution. Spatially distributed models can explicitly learn to account for travel times between subbasins inside a larger catchment, which also makes it possible to analyze runoff generation separately from routing behavior.

Data assimilation: In lumped models, it is hard to include (real-time) measurements from upstream river sections that could improve predictions at downstream locations. In a graph-based model of connected subbasins, any downstream prediction can benefit from upstream information that is propagated along the river network.

Human intervention: It is unclear how to explicitly represent human water extraction, reservoirs, or dams in lumped models. Graph-based models provide more flexibility, as we can account for the spatial location of human interventions explicitly in the graph structure and learn to represent their influence on runoff.

In this work, we compare different types of spatially distributed deep learning models with lumped deep learning models and traditional physics-based hydrologic modeling and routing approaches on >600 gauges of the LamaH dataset [1].


[1] Klingler, C., Schulz, K., and Herrnegger, M.: LamaH-CE: LArge-SaMple DAta for Hydrology and Environmental Sciences for Central Europe, Earth Syst. Sci. Data, 13, 2021.
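
As a toy illustration of the river-network structure these graph-based models exploit (a schematic sketch, not the authors' model: subbasin names, runoff values and the routing rule are assumptions), locally generated runoff can be accumulated from each subbasin to its downstream neighbour over a directed graph; a graph neural network would learn such propagation from data rather than hard-coding it:

```python
from collections import defaultdict

# Hypothetical river network: each subbasin drains into one downstream subbasin (None = outlet).
downstream = {"A": "C", "B": "C", "C": "D", "D": None}
local_runoff = {"A": 1.2, "B": 0.8, "C": 0.5, "D": 0.3}  # locally generated runoff [mm]

def accumulate(downstream, local_runoff):
    """Sum local runoff along the directed network so each node sees its full upstream contribution."""
    upstream = defaultdict(list)
    for node, down in downstream.items():
        if down is not None:
            upstream[down].append(node)

    accumulated = {}
    def total(node):
        if node not in accumulated:
            accumulated[node] = local_runoff[node] + sum(total(up) for up in upstream[node])
        return accumulated[node]

    for node in downstream:
        total(node)
    return accumulated

print(accumulate(downstream, local_runoff))  # the outlet "D" collects runoff from A, B, C and itself
```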

How to cite: Gauch, M., Kratzert, F., Dube, V., Gilon, O., Klotz, D., Metzger, A., Nearing, G., Ofori, F., Shalev, G., Shenzis, S., Tekalign, T., Weitzner, D., Zlydenko, O., and Cohen, D.: Deep Learning for Spatially Distributed Rainfall–Runoff Modeling, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-8899, https://doi.org/10.5194/egusphere-egu24-8899, 2024.

08:55–09:05 | EGU24-5057 | On-site presentation
Yiheng Du and Ilias G. Pechlivanidis

Local observations can drive machine learning (ML) based post-processors for improving hydrological model accuracy and reliability, and further ensuring that model outputs represent the local hydrological conditions. However, post-processing large-scale hydrological models is not straightforward and remains particularly challenging in ungauged basins. This study presents an ML-based post-processing approach which allows streamflow regionalization based on the hydrological characteristics of the river systems. We employed Long Short-Term Memory (LSTM) models to tailor the simulated streamflow obtained from the E-HYPE hydrological model to local observations across the pan-European domain. Here, we took advantage of the European hydrologically similar regions identified in Pechlivanidis et al. (2020), and an LSTM was trained to map simulated to observed runoff for each hydrologically similar cluster. The catchments in each cluster were divided into training and testing datasets under a K-fold cross-validation approach. We compared the raw and post-processed simulations using different evaluation metrics capturing general bias (Mean Absolute Error), high flows (Nash-Sutcliffe Efficiency; NSE), and low flows (log-NSE). The results indicate that the regionalized LSTM approach enhances the hydrological model performance, evidenced not only at the stations incorporated within the training set, but also at those excluded from it. Overall, this study highlights the potential of ML in post-processing hydrological model outputs, especially in ungauged basins, setting a promising framework for AI-enhanced large-scale hydro-climate services.

 

References

Pechlivanidis, I. G., Crochemore, L., Rosberg, J., & Bosshard, T. (2020). What are the key drivers controlling the quality of seasonal streamflow forecasts? Water Resources Research, 56, e2019WR026987. https://doi.org/10.1029/2019WR026987
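
A minimal sketch of the per-cluster K-fold splitting described in the abstract above (catchment IDs, cluster labels and the number of folds are hypothetical, not the study's setup); each held-out fold is treated as pseudo-ungauged when evaluating the post-processor:

```python
import numpy as np
from sklearn.model_selection import KFold

# Hypothetical catchment IDs and the hydrological-similarity cluster each belongs to
catchments = np.array(["c01", "c02", "c03", "c04", "c05", "c06", "c07", "c08"])
clusters = np.array([1, 1, 1, 1, 2, 2, 2, 2])

for cluster_id in np.unique(clusters):
    members = catchments[clusters == cluster_id]
    kf = KFold(n_splits=4, shuffle=True, random_state=0)
    for fold, (train_idx, test_idx) in enumerate(kf.split(members)):
        train_catchments, test_catchments = members[train_idx], members[test_idx]
        # Here one would train an LSTM post-processor on the training catchments of this cluster
        # and evaluate it on the held-out catchments (treated as pseudo-ungauged).
        print(cluster_id, fold, list(train_catchments), list(test_catchments))
```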

How to cite: Du, Y. and Pechlivanidis, I. G.: Enhancing Large-Scale Hydro-Climate Services Through a Regionalized Machine Learning Approach, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-5057, https://doi.org/10.5194/egusphere-egu24-5057, 2024.

09:05–09:15 | EGU24-8537 | ECS | On-site presentation
Bram Droppers, Marc Bierkens, and Niko Wanders

Global hydrological models (GHMs) are an important tool for sustainable development in today’s water-scarce world. These models enable assessment of water scarcity by estimating both the natural water cycle and human activities around the world. Moreover, their process-based structure allows for projections under diverse climate change and socio-economic scenarios; information that is essential to support sustainable water management. Nevertheless, the need for better, higher-resolution and larger-ensemble simulations is reaching the limit of what is computationally feasible.
Recently, the deep-learning community has shown the potential of neural networks in providing highly accurate and computationally cheap hydrological predictions. This development has led to the emergence of deep-learning model surrogates that mimic process-based hydrological simulations using neural networks. Yet, the majority of these surrogates are restricted to assessing land-surface water fluxes at a single spatial resolution, thereby limiting their application for global hydrological models.
We present a novel framework to create deep-learning global hydrological surrogates, with two salient features. First, our surrogate framework integrates spatially-distributed runoff routing that is essential to estimate water availability and human water withdrawals. Second, our surrogate framework offers scalability across various spatial resolutions and can match the wide variety of resolutions at which global hydrological models are applied.
To test our framework, we developed a deep-learning surrogate of the PCRaster Global Water Balance (PCR-GLOBWB) global hydrological model. The surrogate encompasses all water-balance components, including the impact of human activities on the water system. The PCR-GLOBWB surrogate runs faster than its process-based counterpart and performs well when compared to the original model’s output at different spatial resolutions. Interestingly, the multi-resolution surrogate actually outperforms model surrogates trained for a single resolution, even on their target resolution.
Deep-learning surrogates are a useful tool for the global hydrological modeling community, enabling enhanced model calibration (through parameter learning and flux matching) and more detailed model simulations. Our framework provides an excellent foundation for the community to create their own multi-scale deep-learning model surrogates.

How to cite: Droppers, B., Bierkens, M., and Wanders, N.: A multi-resolution deep-learning surrogate framework for global hydrological models, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-8537, https://doi.org/10.5194/egusphere-egu24-8537, 2024.

09:15–09:25 | EGU24-10765 | On-site presentation
Bernt Viggo Matheussen, Rajeev Shrestha, and Bjarte Beil-Myhre

Accurate inflow forecasts play a crucial role in the daily operations of hydropower reservoirs. Practitioners in the hydropower industry typically use physically based models coupled with weather forecasts to produce inflow forecasts for the reservoirs. Despite the emergence of physically based hydrological models since the early 1960s, their growing complexity has posed challenges in usage and calibration. Recent work by Kratzert et al. (2018) suggests that non-linear regression models such as LSTM neural networks (Hochreiter & Schmidhuber, 1997) may outperform traditional physically based models. Given the plethora of hydrological models, it is crucial to identify the most effective configurations across a diverse range of catchments using objective quantitative performance criteria.

This research aims to evaluate various model configurations across multiple catchments, determining the optimal hydrological model for streamflow prediction. Two physics-based models, the Distributed Regression hydrological Model (DRM) by Matheussen at Å Energi and the Statkraft Hydrology Forecasting Toolbox (SHyFT) from Statkraft, were applied alongside two versions of LSTM models tested as standalone and hybrid models with different input and model configurations. Thirteen model configurations underwent testing in sixty-five catchments in southern Norway. The models, including the LSTM networks, were trained on either one catchment (Local) or all catchments (Regional) and tested using two train/test periods and two objective criteria: Nash-Sutcliffe Efficiency (NSE) and Kling-Gupta Efficiency (KGE). The validation scores for NSE and KGE during the two train-test periods were used for benchmarking.

Daily observed climate and streamflow data stem from the Norwegian Water Resources and Energy Directorate (NVE), the Norwegian Meteorological Institute, Å Energi's internal databases and ECMWF (ERA5). We extracted digital elevation data, land cover types, and vegetation information from www.hoydedata.no and the CORINE Land Cover inventory. The key findings of the study show that the data-driven model outperformed the physically based models (SHyFT, DRM). Hybrid models, incorporating output from physics models and meteorological data, surpassed purely data-driven models. The most successful configuration involved two hybrid models, utilizing an LSTM network forced with outputs from physically based models and climate forcings. The results clearly demonstrate that information about the physical hydrological processes enhances LSTM model performance.

How to cite: Matheussen, B. V., Shrestha, R., and Beil-Myhre, B.: Benchmarking physics-based, machine learning, and hybrid hydrology models at multiple catchments in Southern Norway., EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-10765, https://doi.org/10.5194/egusphere-egu24-10765, 2024.

09:25–09:35 | EGU24-16560 | On-site presentation
Biswa Bhattacharya, Manzur Rahman, and Leonardo Alfonso

Understanding morphological changes in river basins is important for efficient river basin management. Such studies on morphological changes can be efficiently carried out by processing remote sensing images. In this respect, Google Earth Engine (GEE) provides an efficient platform for processing such images. In this research an automatic identification of land and water from satellite imagery was carried out using a water-index-based unsupervised classification approach. Multiple water indices were compared for the Brahmaputra River (Assam reach). Landsat images from 1990 to 2020 were used in the study. Various thresholding approaches available in the literature were applied to determine which index, in combination with which thresholding approach, provides the best classification. We found the Normalized Difference Water Index (NDWI) to be the best index for unsupervised classification. Subsequently, a supervised classification approach was adopted to classify land and water from satellite images. In particular, three classification methods were used, namely Classification and Regression Tree (CART), Random Forest (RF) and Support Vector Machine (SVM). The classifiers were trained with the B2, B3, B5 and B6 (Blue, Green, NIR and SWIR1) bands as input and the pre-classified pixel class as output. The resulting classified images were compared with the Joint Research Centre (JRC) monthly water maps and good accuracy was observed. Among the three methods, RF gave the best results. For the Brahmaputra basin we observed considerable morphological changes between 1990 and 2020. This research showed that, given good-quality training data, classifiers can be built that automatically extract water bodies from Landsat or similar images and can be used in the morphological assessment of any river basin.

Keywords: Surface water extraction, supervised classification, Brahmaputra, Google Earth Engine (GEE).
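
A compact sketch of this kind of workflow with the Google Earth Engine Python API (the collection ID, date range, band names, NDWI threshold and training asset are illustrative assumptions, not the authors' exact configuration):

```python
import ee
ee.Initialize()

aoi = ee.Geometry.Rectangle([94.0, 26.0, 96.0, 27.5])  # hypothetical reach of the Brahmaputra (Assam)

# Median Landsat 8 surface-reflectance composite for one year (illustrative collection/date choice)
image = (ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
         .filterBounds(aoi)
         .filterDate("2020-01-01", "2020-12-31")
         .median())

# Unsupervised step: McFeeters NDWI = (Green - NIR) / (Green + NIR), then a simple threshold
ndwi = image.normalizedDifference(["SR_B3", "SR_B5"]).rename("NDWI")
water_unsupervised = ndwi.gt(0.0)  # the threshold would be chosen by a thresholding method

# Supervised step: train a Random Forest on pre-classified sample points (hypothetical asset)
bands = ["SR_B2", "SR_B3", "SR_B5", "SR_B6"]  # Blue, Green, NIR, SWIR1
training_points = ee.FeatureCollection("users/example/brahmaputra_training")  # label property: "class"
samples = image.select(bands).sampleRegions(collection=training_points,
                                            properties=["class"], scale=30)
classifier = ee.Classifier.smileRandomForest(100).train(features=samples,
                                                        classProperty="class",
                                                        inputProperties=bands)
water_supervised = image.select(bands).classify(classifier)
```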

How to cite: Bhattacharya, B., Rahman, M., and Alfonso, L.: Machine learning in classifying morphological changes in the Brahmaputra River using multi-spectral Landsat images in the Google Earth Engine, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-16560, https://doi.org/10.5194/egusphere-egu24-16560, 2024.

09:35–09:45 | EGU24-17807 | ECS | On-site presentation
Tanja Morgenstern, Jens Grundmann, and Niels Schütze

Floods are among the most frequent natural hazards in Germany. To deal with flood events efficiently and successfully, timely and reliable forecasts of the expected runoff are needed. During the last decades, several deep learning methods have proved valuable for rainfall-runoff modelling in many larger catchments, especially Long Short-Term Memory (LSTM) networks. One of the core challenges of data-driven models is the provision of an extensive and informative training database, implicitly describing the cause-effect relationships for different catchment conditions. However, the data basis of individual catchments regarding flood events may be limited due to short observation time series as well as a general lack of flood events during the observation period, which may result in flawed data-driven models for flood forecasting. These problems become even more pronounced in hourly flood forecasts for small-scale, fast-responding catchments.

In this study, with the purpose of hourly forecasting of runoff events in small-scale Saxon catchments, we addressed the aforementioned dilemma by extending the training database from single-catchment datasets (“local network training”) to one dataset containing multiple catchments from one larger region (“regional network training”). Consequently, we trained our LSTM networks on hourly rainfall and runoff time series of preselected rainfall-runoff events from 52 Saxon catchments. Alongside these time series, we included a selection of attributes regarding each catchment’s characteristics and climate, which allows the model to differentiate between catchments and to condition the runoff generation according to the catchment characteristics. In this contribution, we show that our regional network training facilitates rainfall-runoff simulations even at gauging sites with short observation periods – too short to enable local network training – and in extreme cases even at ungauged catchments during flood events. We further discuss the following questions:

  • Which catchment attributes have the highest influence on the quality of hourly flood forecasting in regional network training? The selection of attributes contains topography (e.g. area, catchment shape, elevation, slope & river length), land use (e.g. sealing of the ground & vegetation) as well as climatic conditions (e.g. aridity, yearly potential evapotranspiration and rainfall).
  • In which case may the catchment attributes be omitted in regional network training?
  • When do local and regional network trainings result in flood forecasts of similar quality?
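
As a small, generic illustration of the attribute conditioning described above (not the authors' implementation), static catchment attributes can simply be repeated along the time axis and stacked with the dynamic forcings before being passed to an LSTM:

```python
import numpy as np

n_timesteps = 72                           # e.g. 72 hourly steps of one rainfall-runoff event
dynamic = np.random.rand(n_timesteps, 2)   # hypothetical forcings: rainfall, temperature
static = np.array([42.5, 0.31, 560.0])     # hypothetical attributes: area, sealing fraction, mean elevation

# Repeat the static attributes at every timestep and concatenate them with the dynamic inputs,
# giving the network a constant "fingerprint" of the catchment alongside the forcings.
static_repeated = np.tile(static, (n_timesteps, 1))
lstm_input = np.concatenate([dynamic, static_repeated], axis=1)
print(lstm_input.shape)  # (72, 5): 2 dynamic + 3 static features per timestep
```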

How to cite: Morgenstern, T., Grundmann, J., and Schütze, N.: Flood Forecasting with Deep Learning LSTM-Networks: Relevance of Catchment Attributes in Regional Network Training, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-17807, https://doi.org/10.5194/egusphere-egu24-17807, 2024.

09:45–09:55 | EGU24-15808 | ECS | Virtual presentation
Integrating machine learning technique with SWAT model to optimize the prediction of peak flow in the Beas River basin, India
(withdrawn)
Saran Raaj, Vivek Gupta, Vishal Singh, and Dericks Shukla
09:55–10:05 | EGU24-4390 | On-site presentation
Erfan Goharian, Seyed Mohammad Hassan Erfani, and Mehdi Hatami Goloujeh

The persistent global threat of water-related challenges, particularly floods, necessitates a paradigm shift towards harnessing new technologies, heterogeneous sources of data, and novel techniques to enhance data availability and enable innovative sensing in hydrology. Emerging data sources, including ground-based cameras, smart hydrologic monitoring systems, citizen observatories, and crowdsourcing, along with innovative techniques such as Artificial Intelligence (AI), provide diverse yet novel data sources for more effective monitoring, modeling, and management. This research contributes to this transformative journey by exploring the integration of real-time imagery data from different tools and sources into hydrologic monitoring. Highlighting our efforts is the development of ATLANTIS, the first comprehensive image dataset for semantic segmentation of water bodies and associated objects. We introduce AQUANet, a state-of-the-art deep neural network crafted for precise waterbody segmentation, addressing challenges such as flood detection and inundation mapping. The study further demonstrates flood modeling using cutting-edge deep learning networks, including PSPNet, TransUNet, and SegFormer. Rigorous comparisons against reference data collected through field instruments and sensors underscore the superior performance of SegFormer, achieving an impressive 99.55% Intersection over Union (IoU) and 99.81% accuracy in hydrological monitoring, specifically in water level estimation at our testbed rivers and channels. In conclusion, this presentation not only showcases achievements in flood monitoring using innovative AI techniques and diverse data sources but also discusses how future studies can contribute to the ongoing discourse on the application of advanced technology in hydrologic monitoring systems, paving the way for further innovation and improvements in flood management.
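
Intersection over Union, the headline metric reported above, can be computed from binary masks; a minimal sketch (not the study's evaluation code), assuming predicted and reference water masks on the same grid:

```python
import numpy as np

def iou(pred_mask, ref_mask):
    """Intersection over Union between two binary masks (1 = water, 0 = background)."""
    pred = pred_mask.astype(bool)
    ref = ref_mask.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    return intersection / union if union > 0 else 1.0  # both masks empty: count as perfect agreement

# Hypothetical 4x4 water masks from a segmentation model and from field reference data
pred = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 1, 1]])
ref  = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 1, 1]])
print(f"IoU = {iou(pred, ref):.3f}")
```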

How to cite: Goharian, E., Erfani, S. M. H., and Hatami Goloujeh, M.: Harnessing Heterogeneous Sources of Data and Artificial Intelligence for Hydrologic Monitoring, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-4390, https://doi.org/10.5194/egusphere-egu24-4390, 2024.

10:05–10:15
Coffee break
Chairpersons: Niels Schuetze, Francesca Pianosi
Artificial Intelligence for water resources - Digital Twins, weather forecasting, water quality modelling
10:45–10:55 | EGU24-13702 | ECS | On-site presentation
Enrique Orozco Lopez and David Kaplan

Transformer Neural Networks (TNNs) have caused a paradigm shift in deep learning domains like natural language processing and have gathered immense interest due to their versatility in other fields such as time series forecasting (TSF). Most current TSF applications of TNNs use only historic observations to predict future events, ignoring information available in weather forecasts that could inform better predictions, and with little attention given to the interpretability of the model’s use of environmental input factors. This work explores the potential for TNNs to perform TSF across multiple environmental variables (streamflow, stage, water temperature, and salinity) in two ecologically important regions: the Peace River watershed (Florida) and the northern Gulf of Mexico (Louisiana). The TNN was tested (and uncertainty quantified) for each response variable from one- to fourteen-day-ahead forecasts using past observations and spatially distributed weather forecasts. Additionally, a sensitivity analysis was performed on the trained TNNs’ attention weights to identify the relative influence of each input variable on each response variable’s prediction. Overall model performance ranged from good to very good (0.78<NSE<0.99 for all variables and forecast horizons). Through the sensitivity analysis, we found that the TNN was able to learn the physical patterns behind the data, adapt the use of the input variables to each forecast, and increasingly use weather forecast information as forecasting windows increased. The TNN’s excellent performance and flexibility, along with its intuitive interpretability highlighting the logic behind the model’s forecasting decision-making process, provide evidence for the applicability of this architecture to other TSF variables and locations.

How to cite: Orozco Lopez, E. and Kaplan, D.: Interpretable Transformer Neural Network Prediction of Diverse Environmental Time Series Using Weather Forecasts, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-13702, https://doi.org/10.5194/egusphere-egu24-13702, 2024.

10:55–11:05 | EGU24-12131 | On-site presentation
Matteo Giuliani, Francesco Bosso, Claudia Bertini, Dimitri Solomatine, and Schalk Jan van Andel

Droughts are one of the most dangerous natural hazards affecting today’s societies, with an economic impact amounting to over 9 billion euros per year in Europe. Drought events usually originate from a precipitation deficit, which can then cause water shortages, agricultural losses, and environmental degradation. Despite the numerous efforts and recent advances in extreme event forecasting, the sub-seasonal time scale still represents a challenging lead time for state-of-the-art hydroclimatic predictions. At this scale, the reference period is short enough for the atmosphere to retain a memory of its initial conditions, but also long enough for oceanic variability to affect atmospheric circulation. However, the relative contribution of climate teleconnections and local atmospheric conditions to the genesis of total precipitation at the sub-seasonal scale remains unclear.

In this work, we aim to address this gap by using Machine Learning (ML) to combine the information extracted from teleconnection patterns, global climate variables, and local atmospheric conditions to produce sub-seasonal drought forecasts. Specifically, we implemented a first ML pipeline that uses correlation maps to select relevant grids of global Sea Surface Temperature, Mean Sea Level Pressure, and geopotential height at 500 hPa from the ERA5 reanalysis dataset, which are spatially aggregated via Principal Component Analysis and combined with a set of local variables in the considered region. The second ML approach extends our analysis by explicitly considering the potential role of teleconnection patterns, including North Atlantic Oscillation, Scandinavian oscillation, East Atlantic oscillation, and El Niño Southern Oscillation, to identify different forecast models - in terms of both input variables and model parameters – for the different phases of the climate oscillations and for each month of the year. The resulting combination of global and local variables is then used as input in different ML models, including both feedforward neural networks and extreme learning machines.

Our framework is developed within the CLImate INTelligence (CLINT) project and tested in the task of predicting the total monthly precipitation in the Rijnland area (Netherlands). The resulting ML-based forecasts are then benchmarked against state-of-the-art dynamic forecast products, i.e. the ECMWF Extended Range forecasts. Our findings indicate that combining global and local climate information into ML-based forecast models significantly improves state-of-the-art drought forecast accuracy, thus representing a promising option to timely prompt anticipatory drought management measures.
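
A minimal sketch of the spatial-aggregation step in the first pipeline (array shapes and variable choices are illustrative assumptions, not the CLINT implementation): gridded anomaly fields are flattened to one row per month and reduced to a few principal components that then serve as predictors alongside local variables.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical monthly sea-surface-temperature anomalies: 240 months on a 20 x 30 grid
n_months, n_lat, n_lon = 240, 20, 30
sst_anomalies = np.random.randn(n_months, n_lat, n_lon)

# Flatten each monthly field to a vector (one row per month, one column per grid cell)
X = sst_anomalies.reshape(n_months, n_lat * n_lon)

# Keep the leading principal components as compact large-scale predictors
pca = PCA(n_components=5)
pcs = pca.fit_transform(X)                        # shape (240, 5)
print(pcs.shape, pca.explained_variance_ratio_)

# The components would then be combined with local variables and fed to a
# feedforward neural network or extreme learning machine for the drought forecast.
local_predictors = np.random.randn(n_months, 3)   # hypothetical local atmospheric variables
features = np.hstack([pcs, local_predictors])
```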

How to cite: Giuliani, M., Bosso, F., Bertini, C., Solomatine, D., and van Andel, S. J.: Leveraging climate data at different spatial scales via machine learning to improve sub-seasonal drought predictions, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-12131, https://doi.org/10.5194/egusphere-egu24-12131, 2024.

11:05–11:15 | EGU24-128 | On-site presentation
Erwan Gloaguen, Xiao Xia Liang, Maxime Claprood, and Daniel Paradis

Groundwater is, and will increasingly be, under threat from many anthropogenic stresses such as climate change, population growth in coastal cities, and pollution. Realistic 3D numerical twins of aquifers allow forecasting of groundwater flow and of aquifer behaviour under different hydrogeological changes. In this project, we built an ensemble of numerical twins of an aquifer located south-east of Montreal, Quebec, Canada, using a nested geostatistical workflow in order to optimize a pump-and-treat plant constrained by multiple environmental indicators. The ensemble provides a quantitative measure of the uncertainty for each indicator based on the optimization of the ensemble. While these models have proved to be operationally useful, any change or scenario that must be tested requires the managers of the resource to hire qualified companies. This prevents the long-term use of the numerical twins and limits their accessibility for resource management. This motivates the training of a deep graph neural network on the numerical twins. The trained network is able to forecast short-term changes in groundwater flow due to new pumping rates or new pumping wells in less than a minute.

How to cite: Gloaguen, E., Liang, X. X., Claprood, M., and Paradis, D.: Numerical twins and deep neural network to predict groundwater flow, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-128, https://doi.org/10.5194/egusphere-egu24-128, 2024.

11:15–11:25 | EGU24-12982 | ECS | On-site presentation
Chanoknun Wannasin, Bouke Biemond, Bas Wullems, Thies Blokhuijsen, Meinard Tiessen, Fedor Baart, and Jaap Kwadijk

Estuarine saltwater intrusion poses a significant hydrological and environmental challenge in the Rhine-Meuse delta, amplified by human interventions (e.g., river engineering) and climate change impacts (e.g., droughts, storm surges, and sea-level rise). The more frequent and severe salt intrusion events, especially during the droughts in 2018, 2020, and 2022, emphasize the need for accurate forecasting and effective management. This research aims to develop a digital twin for real-time forecasting and management of salt intrusion. The digital twin is part of the Virtual Delta, one of the main outputs that the SALTISolutions research project is working towards. As an operational modelling toolbox, this digital twin consists of four components: observed data analysis, real-time forecasting, early warning, and exploratory simulations. The observed data analysis component includes statistical information, such as return periods and exceedance probabilities of chloride concentration, river discharge, and water level. The forecasting component employs currently available salt intrusion models within the SALTISolutions project, including 1D, 2D, statistical, and machine learning models. The early warning component utilizes thresholds (e.g., maximum chloride concentrations at freshwater inlets). Exploratory simulations consider what-if scenarios (e.g., lower river discharge) and management options (e.g., combinations of different real-time measures). The research is ongoing, and the current development will be demonstrated. The digital twin is expected to assist water managers and stakeholders (e.g., drinking water companies) in decision-making, addressing impacts of saltwater intrusion, and ensuring a continuous supply of freshwater in the delta.

How to cite: Wannasin, C., Biemond, B., Wullems, B., Blokhuijsen, T., Tiessen, M., Baart, F., and Kwadijk, J.: Virtual Delta: a digital twin for real-time forecasting and management of saltwater intrusion in the Rhine-Meuse delta, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-12982, https://doi.org/10.5194/egusphere-egu24-12982, 2024.

11:25–11:35 | EGU24-1743 | ECS | On-site presentation
María Gabriela Castrellón, Zheng Bing Wang, Dimitri Solomatine, and Ioana Popescu

Gatun Lake, located in central Panama, is the main source of freshwater for the operations of the Panama Canal. It also provides drinking water for nearly 600,000 people which is about 15% of Panama’s population. Historically, the lake has maintained a salinity well below 0.1 PSU, but since the inauguration of the Neo-Panamax locks in 2016, salinity in the lake has rapidly increased. This is a concern not only for drinking water supply and human health but for biodiversity as well. Accurately modelling salinity concentration and transport mechanisms in Gatun Lake is crucial for understanding its response to climate change and anthropogenic activities and to design effective management and mitigation strategies. However, current modelling techniques, both process-based (PB) and data-driven (DD), are limited in their ability to provide fast and accurate results with small amounts of data, especially in situations when boundary conditions are unknown or uncertain. Therefore, hybrid models have emerged as a potential solution to bridge the gap between these two approaches. The work presented here explores hybrid modelling approaches to estimate the magnitude of saltwater intrusion through the Neo-Panamax locks and calculate a salinity mass balance for Gatun Lake. Understanding these processes is useful for personnel at the Panama Canal Authority as it will inform their decision-making regarding management of Gatun Lake’s water quality and quantity.

How to cite: Castrellón, M. G., Wang, Z. B., Solomatine, D., and Popescu, I.: Hybrid approaches to model salinity in Gatun Lake, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-1743, https://doi.org/10.5194/egusphere-egu24-1743, 2024.

11:35–11:45 | EGU24-21973 | On-site presentation
Wenchuang Zhang, Eoghan Clifford, and Páraic Ryan

Influent flow volumes play a crucial role in the management and operation of wastewater treatment plants (WWTPs). In regions like Europe, characterized by abundant rainfall and aging wastewater systems, it is common to discharge stormwater into the sewerage system, i.e. to use combined sewer systems in urban areas, employing a single pipe for transporting both rainfall and wastewater. Consequently, large amounts of rainfall can cause WWTPs to become overloaded, which can lead to wastewater overflows, i.e. discharges of untreated wastewater into receiving water bodies. This can have a significant impact on the surrounding environment, damaging ecological systems and hindering the use of public amenities such as beaches. Recently, researchers have used machine learning methods to predict WWTP influent volumes to help manage these issues. There are, however, a number of outstanding challenges, such as the lack of a parameter selection process, the high likelihood of incorporating noise as more input data are added, and the existence of model uncertainty. This study aims to use a Back Propagation Neural Network (BPNN) combined with a sensitivity analysis to help address these existing shortcomings, and to provide a short-term forecasting tool which seeks to give WWTP managers and operators an early warning of potential overflow events. Gaussian Process Regression (GPR) has also been applied herein to provide probabilistic estimates of predictions. This provides information on the level of confidence associated with the predicted values by assessing the uncertainty of the model.

In this research, input variables from five aspects have been considered: i) an energy-water balance model; ii) infiltration; iii) the effect of seasonal variation; iv) the influence of changes in tidal and river level; v) lag effects. Different combinations of input variables were tested using recorded weather data and wastewater influent data for a WWTP in Ireland. In Group 1, daily precipitation, previous daily precipitation, maximum air temperature, soil moisture deficit, groundwater level and a seasonal index were used to predict WWTP influent, with a resulting value of 0.68. Group 2 considers daily precipitation, previous daily precipitation, soil moisture deficit, tidal level and river level, with a value of 0.79. We then added the previous daily influent to Groups 1 and 2, respectively, obtaining the same result in both cases, with a value of 0.88. These results provide new insights for timely warning of influent variations and potential overflows, and improve the practicality of machine learning in WWTPs.
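
As noted above, GPR yields a predictive distribution rather than a single value. A minimal scikit-learn sketch (synthetic inputs and units, not the study's model) showing how the standard deviation of each prediction expresses the model's confidence:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical daily predictors: rainfall, previous-day rainfall, soil moisture deficit
X = np.random.rand(200, 3)
# Hypothetical daily influent volume responding mainly to rainfall, plus noise
y = 5000 + 12000 * X[:, 0] + 4000 * X[:, 1] + np.random.normal(0, 500, 200)

kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=1.0)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

X_new = np.random.rand(5, 3)
mean, std = gpr.predict(X_new, return_std=True)   # predictive mean and uncertainty
for m, s in zip(mean, std):
    print(f"predicted influent ~ {m:.0f} m3/day +/- {1.96 * s:.0f} (95% interval)")
```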

How to cite: Zhang, W., Clifford, E., and Ryan, P.: Back Propagation (BP) Neural Network for a Short-term Forecasting Tool in Wastewater Treatment Plant Influent, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-21973, https://doi.org/10.5194/egusphere-egu24-21973, 2024.

11:45–11:55 | EGU24-6762 | ECS | Virtual presentation
Farkhondeh Khorashadi Zadeh and MohammadReza Fatemi

Chlorophyll-a is an essential indicator for assessing nutrient content in water resources. Its concentration is influenced by various parameters, including Total Phosphorus (TP), Total Nitrogen (TN), Turbidity (TB), Total Suspended Solids (TSS), temperature, pH and so on. Accurate estimation of chlorophyll-a concentration across different spatial and temporal scales is crucial for assessing the condition of surface water bodies with respect to bacterial and nutrient levels. High chlorophyll-a concentrations may compromise aquatic animal health, leading to disease due to increased bacterial concentrations in water.

This study aims to develop an estimation model for chlorophyll-a concentration by integrating artificial intelligence models, remote sensing data and field data. The study area includes Gorgan Bay and its contributing rivers. Initially, field data, including water quality parameters, from the water bodies and nearby rivers are analyzed. In addition to field data, remote sensing data, including chlorophyll-a concentration in the Bay, are obtained from the MODIS satellite sensors. Random Forest (RF) is selected as the artificial intelligence technique. The input data of the RF model are, therefore, the climate data, the water quality data of the incoming rivers and the Bay, and the flow of the incoming rivers. The model output is the chlorophyll-a concentration in Gorgan Bay. The performance of the model is evaluated using different statistical measures. Different techniques are applied to find the most influential input variables for simulating the chlorophyll-a concentration in the Bay. The developed model is capable of predicting chlorophyll-a concentration, supporting improved water quality management of reservoirs (like bays). It can be utilized in locating optimal natural fish farming areas, managing and preserving aquatic ecosystems, and enhancing reservoir water quality.

Key words: Chlorophyll-a concentration, Artificial intelligence, Random forest, Remote sensing data, Field data.
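
A minimal sketch of the modelling step described above (the synthetic data, variable names and importance method are assumptions, not the study's dataset or code), fitting a Random Forest regressor and ranking input variables by permutation importance:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical predictors: TP, TN, turbidity, TSS, water temperature, river inflow
names = ["TP", "TN", "turbidity", "TSS", "temperature", "inflow"]
X = rng.random((n, len(names)))
# Synthetic chlorophyll-a driven mostly by nutrients and temperature, plus noise
y = 30 * X[:, 0] + 15 * X[:, 1] + 8 * X[:, 4] + rng.normal(0, 2, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_train, y_train)
print("R2 on held-out data:", round(rf.score(X_test, y_test), 3))

# Rank predictors by how much shuffling each one degrades held-out performance
imp = permutation_importance(rf, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(names, imp.importances_mean), key=lambda p: -p[1]):
    print(f"{name:12s} {score:.3f}")
```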

How to cite: Khorashadi Zadeh, F. and Fatemi, M.: Integration of artificial intelligence techniques, remote sensing data and field data for simulation of chlorophyll-a concentration in Gorgan Bay, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-6762, https://doi.org/10.5194/egusphere-egu24-6762, 2024.

11:55–12:05 | EGU24-7003 | ECS | On-site presentation
Claudia Corona and Terri Hogue

As climate change continues to impact stream systems worldwide, water temperature is an increasingly important indicator of distribution patterns and mortality rates among fish, amphibians, and macroinvertebrates. Technological advances tracing back to the 1960s have improved our ability to measure stream water temperature (SWT) at varying spatial and temporal resolutions, with the fundamental goal of better understanding stream function and ensuring ecosystem health. Despite significant advances, there continue to be a large number of stream reaches, stream segments and entire catchments that are difficult to access for a myriad of reasons, including but not limited to physical limitations. Moreover, there are noted access issues, financial constraints, and temporal and spatial inconsistencies or failures with in situ instrumentation.

Over the last few decades and in response to these limitations, statistical methods and physically-based computer models have been steadily employed to examine SWT dynamics and controls. Most recently, the use of artificial intelligence, specifically machine learning (M.L.) algorithms, has garnered significant attention and utility in the hydrologic sciences, specifically as a novel tool to learn yet-to-be-discovered patterns from complex data and to help fill gaps in data streams and knowledge. Our review of publications focusing on stream water temperature modeling and prediction identified a similar number (~26) of publications utilizing M.L. in the previous four years (2020–2023) as in the preceding 19 years (2000–2019).

The objective of this work is three-fold: first, to provide a concise review of the utilization of M.L. algorithms in stream water temperature modeling and prediction. Second, to review M.L. performance evaluation metrics as they pertain to SWT modeling and prediction, identify the commonly used metrics, and suggest guidelines for easier comparison of M.L. performance across SWT studies. Finally, we examine where progress has been made in our understanding of the physical system from the use of M.L. in SWT modeling and prediction, and identify where progress is still needed to advance our understanding of spatial and temporal patterns of stream water temperature.

How to cite: Corona, C. and Hogue, T.: A Critical Look at Machine Learning Algorithms in River/Stream Water Temperature Modeling, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-7003, https://doi.org/10.5194/egusphere-egu24-7003, 2024.

12:05–12:15 | EGU24-6548 | ECS | Virtual presentation
Luisa Vieira Lucchese, Rangel Daroya, Travis Simmons, Punwath Prum, Subhransu Maji, Tamlin Pavelsky, Colin Gleason, and John Gardner

Harmonized Landsat Sentinel-2 (HLS) provides high-quality images every 2–3 days across Earth. However, HLS has not been widely used to measure Suspended Sediment Concentration (SSC) in rivers. Here, we used HLS to build a fully open-source, open-architecture, and scalable image processing workflow and neural network algorithm to estimate SSC in global rivers. The extracted HLS surface reflectance was joined with global in-situ SSC measurements and used to train an ensemble of Artificial Neural Networks (ANNs). Two ANNs were developed: one trained on the lower SSC values (up to 20.08 mg/L) and the other trained on higher SSC values (up to 403.43 mg/L). The ANNs were able to achieve satisfactory performance for a global SSC model, with a median absolute error of 5.10 mg/L, a pairwise correlation of 0.457, an absolute E90 of 46.85 mg/L and an absolute E95 of 84.9 mg/L. The preprocessing module and the ANN models were optimized to have few dependencies and to finish execution within a reasonable timeframe (the ANN models are executed in approximately 1 second per node). These characteristics make the model suitable for implementation on the Amazon Web Services (AWS) cloud, where they are planned to automatically generate SSC data on-the-fly. We will combine the global SSC model with Surface Water and Ocean Topography (SWOT) discharge data to generate a self-updating, global sediment flux dataset to be made available on the National Aeronautics and Space Administration (NASA) PO.DAAC portal.
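
A simplified sketch of the two-regime ensemble idea described above (synthetic reflectance features, an assumed 20 mg/L split, and scikit-learn MLPs rather than the authors' network architecture):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 1000
X = rng.random((n, 6))                              # hypothetical HLS surface-reflectance features
ssc = np.exp(5 * X[:, 0]) + rng.normal(0, 1, n)     # synthetic, strongly skewed SSC [mg/L]
ssc = np.clip(ssc, 0.1, None)

threshold = 20.0                                    # assumed split between low- and high-SSC models
low, high = ssc <= threshold, ssc > threshold

ann_low = make_pipeline(StandardScaler(), MLPRegressor(hidden_layer_sizes=(32, 16),
                                                       max_iter=2000, random_state=0))
ann_high = make_pipeline(StandardScaler(), MLPRegressor(hidden_layer_sizes=(32, 16),
                                                        max_iter=2000, random_state=0))
ann_low.fit(X[low], ssc[low])
ann_high.fit(X[high], ssc[high])

# At prediction time, a simple gating rule (here: the low-SSC model's own estimate) picks the member.
first_guess = ann_low.predict(X)
prediction = np.where(first_guess <= threshold, first_guess, ann_high.predict(X))
print("median absolute error [mg/L]:", round(np.median(np.abs(prediction - ssc)), 2))
```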

How to cite: Vieira Lucchese, L., Daroya, R., Simmons, T., Prum, P., Maji, S., Pavelsky, T., Gleason, C., and Gardner, J.: Modeling suspended sediment concentration using artificial neural networks, an effort towards global sediment flux observations in rivers from space, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-6548, https://doi.org/10.5194/egusphere-egu24-6548, 2024.

12:15–12:25

Orals: Wed, 17 Apr | Room 3.16/17

Chairpersons: Niels Schuetze, Alessandro Amaranto
Decision Support Systems and new datasets, tools and methods for time series analysis
08:30–08:40 | EGU24-2599 | ECS | On-site presentation
Andreja Jonoski, Muhammad Haris Ali, Claudia Bertini, Ioana Popescu, and Schalk Jan van Andel

The escalating urgency and severity of climate change (CC) consequences are intensifying the importance of science-informed decision making. Despite the rapid advancements in climate research directed towards finding solutions for society, there persists a notable gap between the knowledge generated by scientists and its application by resource managers, policymakers, and other decision-makers. The observed gap is partly attributed to communication issues and a mismatch between how researchers formulate scientific information and how stakeholders perceive its usability and legitimacy.

Within the European Union H2020 project 'EIFFEL’ (www.eiffel4climate.eu), the impact of CC on the surface-subsurface hydrology of the Aa of Weerijs catchment, situated between Belgium and the Netherlands, has been modelled and analysed, and nature-based (Nb) adaptive strategies have been developed, specifically targeting drought conditions and water shortages during summer. Recognizing the above-mentioned gap, a web application has been designed to support participatory planning and dissemination of results. The application enables stakeholders to visualize the potential impact of CC on drought conditions in near future (2050) and assess the potential of adaptive strategies to cope with such CC threat. This assessment is carried out by making use of drought-related Key Performance Indicators (KPIs), developed in consultation with the main stakeholders in the area. The underlying principle is that the adaptive strategies are co-created in a transparent, multi-stakeholder and participatory context, streamlining their implementation in landscape planning.

The web app has a 3-tier architecture and ingests the pre-processed output of a physically based, fully distributed hydrological model developed using the MIKE-SHE modelling system of DHI, Denmark. In the first (presentation) tier, each tab on the top-level navigation bar leads to interactive user interfaces with embedded maps, designed with careful consideration of human-computer interaction (HCI) and user experience (UX) principles using standard web technologies, such as HTML, CSS, and JavaScript. The second (logic) tier contains the web server and a dedicated map server for providing spatial data to the map interface. Python has been used to handle dynamic requests by the user for the display of data on the presentation tier. The third (data) tier contains pre-processed model outputs that are displayed on the front tier through the logic tier.

For testing and validation of the technical performance of the web app, a first demonstration and testing workshop was held in November 2023 with water expertise stakeholders. The web app successfully guided users through the storyline towards the research findings. Overall, the work was appreciated and encouraged by positive feedback. Future workshops are planned with a broader group of stakeholders, which will hopefully further validate its value as a support tool for assessing climate adaptation strategies, jointly used by domain scientists, modellers and stakeholders.
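
For orientation, a minimal sketch of a logic-tier endpoint of the kind described above (a generic Flask route with a hypothetical KPI file; the EIFFEL web application itself is not reproduced here): the presentation tier requests pre-processed model output and receives it as JSON for display on the map interface.

```python
import json
from flask import Flask, jsonify

app = Flask(__name__)

# Data tier: pre-processed hydrological model output stored as JSON
# (hypothetical file of drought KPIs per scenario and sub-area).
with open("kpi_results.json") as f:
    KPI_RESULTS = json.load(f)

# Logic tier: serve the KPIs of one climate/adaptation scenario to the presentation tier.
@app.route("/api/kpi/<scenario>")
def get_kpi(scenario):
    if scenario not in KPI_RESULTS:
        return jsonify({"error": "unknown scenario"}), 404
    return jsonify(KPI_RESULTS[scenario])

if __name__ == "__main__":
    app.run(debug=True)
```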

How to cite: Jonoski, A., Ali, M. H., Bertini, C., Popescu, I., and van Andel, S. J.: Web application for supporting assessment of climate change adaptation strategies in Aa of Weerijs catchment, the Netherlands, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-2599, https://doi.org/10.5194/egusphere-egu24-2599, 2024.

08:40–08:50 | EGU24-1506 | On-site presentation
Astria Nugrahany, Rasyid Muhammad Amirul Muttaqin, and Fahmi Hidayat

Perusahaan Umum Jasa Tirta I (PJT-I) is a State-Owned Enterprise in Indonesia that focuses on Water Resources Management. Currently, PJT-I manages and utilizes water resources in five river basins under the jurisdiction of the Public Works and Housing Ministry. In conducting its business processes, PJT-I is highly influenced by hydroclimatic conditions, water availability, river basin conditions, and water quality. Therefore, the operational activities of PJT-I heavily rely on accurate data and information related to water resources. It is crucial for PJT-I to have the capability to access, analyze, and manage accurate and timely information to enhance the efficiency of water resource management. Hence, the company has implemented the AQUARIUS software as a decision support tool. The four main modules of AQUARIUS—Connect, Time-Series, Forecast, and WebPortal—are integrated into PJT-I to provide timely and reliable data for decision-makers. The AQ Connect module is responsible for managing the automatic extraction of time-series data from external sources. Meanwhile, the AQ Time-Series system is the core of AQUARIUS, serving as the primary platform for managing water resource data, including quantity and quality data, as well as meteorological information and other sensors. This system includes components such as the Time-Series Server, Database, Springboard, and Tools that work together to handle time-series data corrections without affecting raw data, the development of rating curves, derivation, and automatic data computation, as well as the production workflow and data publication from various data sources. AQUARIUS Forecast, as a flexible modeling environment, is specifically designed for river system modeling, processing, and time-series simulation. This module can incorporate complex operational rules into the model, replicating specific operational requirements such as reservoir rules, environmental releases, and water allocation. Finally, AQUARIUS WebPortal is a browser-based information and data presentation system that integrates various aspects of data collection, data storage, reporting, data computation, and data management, providing an efficient real-time information display. With this implementation, PJT-I can be more effective and efficient in managing water resources, enhancing the overall performance of the company. The implementation of AQUARIUS currently provides real-time information on 16 water resource parameters from a total of 324 sensors and observations managed by PJT-I. Graphical customization of all observation data and its derivatives serves as a decision support tool. Real-time alert notifications from forecasts and monitoring of parameter conditions prepare PJT-I to respond to disasters. Thus, the implementation of AQUARIUS has improved the effectiveness and efficiency of water resource management by PJT-I overall.

How to cite: nugrahany, A., Amirul Muttaqin, R. M., and hidayat, F.: Utilizing AQUARIUS as Water Resource Management Software: A Decision Support Tool in the Operational Area of Jasa Tirta I State-Owned Enterprise, Indonesia, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-1506, https://doi.org/10.5194/egusphere-egu24-1506, 2024.

08:50–09:00 | EGU24-555 | ECS | On-site presentation
Robert Reinecke, Francesca Pianosi, and Thorsten Wagener

We are in a simultaneous state of exuberance and starvation of Earth system data. Model ensembles of increasing complexity provide petabytes of output, while remote sensing products offer terabytes of new data every day. On the other hand, we still lack data on some key processes that are more challenging to observe, like groundwater recharge, or have data only from particular regions of the world (often regions already heavily impacted by anthropogenic change). This leaves us with highly imbalanced datasets. Our ability to produce and collect mountains of data contrasts with our progress in improving scientific process understanding. How can we harness model simulations and data alike to enhance our knowledge and test scientific hypotheses about process relationships despite data gaps and poorly known biases in modelled and observational datasets? Our talk discusses methods to approach this problem while being agnostic to the data source (model simulations or observations). We present a new approach to interrogate a given dataset and identify correlational and possibly causal relationships between its variables. We test the method on an ensemble of complex global hydrological model simulations and observations from the ISIMIP experiments, and demonstrate its usefulness and limitations. We show that our approach can provide powerful insights into dominant process controls while scaling with large amounts of data.

How to cite: Reinecke, R., Pianosi, F., and Wagener, T.: Towards understanding dominant controls on Earth system processes across observations and models, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-555, https://doi.org/10.5194/egusphere-egu24-555, 2024.

09:00–09:10 | EGU24-18400 | ECS | On-site presentation
Mario Alberto Ponce Pacheco, Soham Adla, Ramesh Guntha, Aiswarya Aravindakshan, Maya Presannakumar, Ashray Tyagi, Anukool Nagi, Prashant Pastore, and Saket Pande

Smallholder farmers are critical to global food production and natural resource management. Due to increased competition for water resources and variability in rainfall due to climate change, chronic irrigation water scarcity is rising, particularly in drought-prone regions like Vidarbha, Maharashtra (India). Improving irrigation water efficiency is critical to sustainable agricultural intensification; however, adopting a new technology represents a certain level of risk for farmers, who invest time and economic resources in changing their practices. Because of the uncertainties in rainfall (monsoon onset and dry spells), in addition to ongoing global change, the expected yield is uncertain and variable, so a clear comparison between different irrigation technologies is not straightforward. We have developed software that allows farmers to make decisions in real time based on the practices (fertilisation and irrigation) implemented in their crops, mainly cotton. By implementing a dynamic socio-hydrological model, the software provides a risk forecast of the yield and profit the user can expect at the end of the season under the current practices; in addition, the software computes the forecast of production under a provided best-practices scenario, so that users can compare and improve their practices. The model also considers water storage and biomass production as state variables, providing an understanding of the impact of the implemented practices on natural resources. Finally, we implemented a kernel principal component analysis (KPCA) to consider the impact of socioeconomic factors on the yearly outcome, based on previous surveys performed in the area. We have followed an object-oriented programming (OOP) approach in order to optimise the management of the information. The app not only processes social and agricultural information provided by the user but also retrieves and continually updates climate datasets from the web, as well as market prices. Farmers can request the execution of the socio-hydrological model on our servers from their own mobile devices, helping the adoption of these technologies. By following an agile methodology, the mobile app has been tested with farmers in order to get feedback from real users; this brought the opportunity to redesign the functionality based on a correct understanding of the information and a fast and clear handling of the tool. In addition, this software represents a useful tool to capture and track information about farmers' water use.
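
A minimal sketch of the KPCA step mentioned above (synthetic survey data and parameter choices are assumptions, not the project's data or code), reducing socioeconomic survey variables to a few non-linear components that can then enter the socio-hydrological model:

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
# Hypothetical survey matrix: one row per farmer, columns such as farm size,
# household income, education level, access to credit, irrigation experience.
surveys = rng.random((150, 5))

X = StandardScaler().fit_transform(surveys)
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=0.5)
socio_components = kpca.fit_transform(X)      # shape (150, 2): compact socioeconomic descriptors
print(socio_components.shape)
```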

How to cite: Ponce Pacheco, M. A., Adla, S., Guntha, R., Aravindakshan, A., Presannakumar, M., Tyagi, A., Nagi, A., Pastore, P., and Pande, S.: Developing a mobile app for adopting efficient irrigation technologies for cotton production in India, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-18400, https://doi.org/10.5194/egusphere-egu24-18400, 2024.

09:10–09:20
|
EGU24-1477
|
ECS
|
On-site presentation
Ashkan Hassanzadeh, Enric Vázquez-Suñé, and Sonia Valdivielso

This research introduces WaterpyBal, a dynamic tool tailored for spatiotemporal modeling of water balance, emphasizing diffuse precipitation and recharge dynamics. WaterpyBal demonstrates adaptability through its incorporation of diverse input datasets and flexible temporal intervals for model simulations. The tool seamlessly integrates critical stages of water balance assessment, encompassing data interpolation, evapotranspiration, and infiltration computations, accounting for soil attributes and urban hydrological intricacies. It delivers comprehensive water budget parameters, including recharge, deficit, and runoff, accompanied by the generation of informative maps, datasheets, and raster archives.

WaterpyBal's modular architecture establishes a versatile foundation, positioning it as an open-source Python library exclusively dedicated to Soil Water Balance (SWB) computations. Supporting temporal variations at hourly, daily, and monthly scales, WaterpyBal accommodates a spectrum of data formats, consolidating the disparate stages of SWB calculations into an integrated library. Leveraging the NetCDF data format, widely recognized in scientific toolsets, WaterpyBal streamlines the workflow from spatial interpolation to result visualization, enhancing its applicability across diverse environmental investigations.
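
As a rough illustration of the NetCDF-centred workflow described (and not WaterpyBal's actual API), a gridded daily water-balance step can be written with xarray and exported to a NetCDF archive; the variable names and the single-bucket formulation below are assumptions.

```python
# Illustrative only -- not WaterpyBal's API. A gridded daily bucket balance:
# precipitation fills a soil store, excess above the capacity becomes recharge,
# and the result is written to a NetCDF archive. All names are assumptions.
import numpy as np
import pandas as pd
import xarray as xr

time = pd.date_range("2020-01-01", periods=90, freq="D")
pr = xr.DataArray(np.random.gamma(0.5, 4.0, (90, 10, 12)),
                  dims=("time", "y", "x"), coords={"time": time}, name="pr")
et = xr.DataArray(np.full((90, 10, 12), 1.5),
                  dims=("time", "y", "x"), coords={"time": time}, name="et")

smax = 100.0                                        # soil storage capacity (mm), assumed uniform
storage = xr.zeros_like(pr.isel(time=0, drop=True))
recharge = []
for t in pr.time:                                   # simple daily bucket: fill, spill to recharge
    storage = (storage + pr.sel(time=t, drop=True) - et.sel(time=t, drop=True)).clip(min=0.0)
    recharge.append((storage - smax).clip(min=0.0))
    storage = storage.clip(max=smax)

out = xr.concat(recharge, dim="time").assign_coords(time=pr.time)
out.to_dataset(name="recharge").to_netcdf("recharge.nc")   # NetCDF raster archive
```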

User engagement is facilitated through the WaterpyBal Studio, an intuitive graphical interface that guides users through each stage of the modelling process. WaterpyBal Studio has a user-centric design and is efficient in supporting sustainable groundwater management initiatives.

In summary, WaterpyBal and WaterpyBal Studio emerge as an inclusive solution for water balance modelling, embodying an integrated approach and adhering to the principles of open-source methodologies in environmental research.

How to cite: Hassanzadeh, A., Vázquez-Suñé, E., and Valdivielso, S.: Enhancing Reproducibility in Soil Water Balance Environmental Studies: The WaterpyBal Framework, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-1477, https://doi.org/10.5194/egusphere-egu24-1477, 2024.

09:20–09:30
|
EGU24-19450
|
ECS
|
On-site presentation
Christophe Dessers, Pierre Archambeau, Benjamin Dewals, Sébastien Erpicum, and Michel Pirotton

In hydrology, a plethora of modelling approaches exist. They differ in several aspects, including the underlying hypotheses (empirical, conceptual vs. physically based) and the spatial discretisation (lumped, semi-distributed, gridded). The advent of machine learning and AI techniques further expands the spectrum of available modelling options. In this context, there is a growing scientific interest in systematically comparing existing models, understanding the reasons behind their relative performance, and applying multi-model approaches to increase the reliability of the outcomes. However, little research has been carried out so far to compare all these types of models in the same framework, namely using similar input data, pre-processing procedures, parameter optimisation algorithms and strategy, objective function, etc.

 

WOLFHydro, developed by the HECE group at the University of Liège, offers such a framework. It provides a flexible simulation tool organised in ‘modules’ and capable of representing any catchment, thus keeping a tuneable level of complexity and detail in the description of all the physical processes at work, while remaining in the same modelling environment and starting from exactly the same input data. The software parcels out a catchment into sub-catchments or evaluation points, which are arranged in a topology network. Each module can contain a chosen type of model (physically based, conceptual, empirical) with the desired spatial representation (lumped, gridded, semi-distributed), so that modules can be assembled to create hybrid models. The software also accommodates anthropogenic structures, such as dams, storage basins, or any other hydraulic structure defined by a set of operation rules customisable by the user.
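
A hypothetical sketch of the module-and-topology idea follows; the class and function names are invented for illustration and do not reflect WOLFHydro's actual code.

```python
# Hypothetical sketch, not WOLFHydro's classes: each node wraps a model of any
# type (conceptual, empirical, ...) and passes its outflow to its downstream node.
from dataclasses import dataclass
from typing import Callable, Dict, Optional
import numpy as np

@dataclass
class Module:
    name: str
    model: Callable[[np.ndarray], np.ndarray]   # rainfall series -> outflow series
    downstream: Optional[str] = None

def route(modules: Dict[str, Module], rain: Dict[str, np.ndarray]) -> Dict[str, np.ndarray]:
    """Accumulate outflows down the network (dict assumed ordered upstream to downstream)."""
    template = next(iter(rain.values()))
    inflow = {name: np.zeros_like(template) for name in modules}
    outflow = {}
    for name, mod in modules.items():
        q = mod.model(rain[name]) + inflow[name]
        outflow[name] = q
        if mod.downstream is not None:
            inflow[mod.downstream] = inflow[mod.downstream] + q
    return outflow

# a lumped conceptual headwater (linear store) feeding an empirical outlet reach
linear_store = lambda p: np.convolve(p, np.exp(-np.arange(24) / 6.0) / 6.0, mode="same")
mods = {"headwater": Module("headwater", linear_store, downstream="outlet"),
        "outlet":    Module("outlet", lambda p: 0.3 * p)}
flows = route(mods, {"headwater": np.random.rand(240), "outlet": np.random.rand(240)})
```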

 

The software currently contains a number of models developed in-house, as well as widely accepted ones (GR4H, VHM, etc.). They have been validated and tested on several Belgian catchments, in particular for the 2021 extreme floods in the Vesdre and Amblève valleys. The software also features post-processing tools and a graphical user interface (GUI) to facilitate inspection of the results.

 

Thanks to its versatility, WOLFHydro aims at reducing the biases of model comparisons conducted in separate frameworks, improving our understanding of dominant hydrological processes, better evaluating the influence of model structure complexity, and enabling ensemble modelling.

How to cite: Dessers, C., Archambeau, P., Dewals, B., Erpicum, S., and Pirotton, M.: WOLFHydro: a modular framework for multi-model hydrological simulations, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-19450, https://doi.org/10.5194/egusphere-egu24-19450, 2024.

09:30–09:40
|
EGU24-8410
|
ECS
|
On-site presentation
Elżbieta Lasota, Julius Polz, Timo Houben, Lennart Schmidt, David Schäfer, Jan Bumberger, and Christian Chwala

Efficient quality control (QC) of time series data from environmental sensors is crucial for ensuring data accuracy and reliability. In this work, we turn to machine learning, specifically Graph Neural Networks (GNN), to elevate QC efficiency for large datasets originating from sparsely distributed sensors. Our proposed model, specifically tailored for anomaly detection as a vital aspect of QC, combines graph convolution (GC) and Long Short-Term Memory (LSTM) layers to capture both spatial dependencies and temporal patterns in the time series data. The focus on anomaly detection enables the identification of deviations or irregularities in the signal, providing insights into important events, faults, or disturbances within the data.
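
A conceptual sketch of the spatial-then-temporal structure described, written in plain PyTorch rather than a dedicated graph library; the normalised adjacency, layer sizes, and single graph-convolution step are assumptions of this sketch, not the authors' architecture.

```python
# Conceptual sketch: one graph-convolution step mixes neighbouring sensors at
# each time step, an LSTM then models the temporal pattern per sensor, and a
# sigmoid head returns a per-step anomaly score.
import torch
import torch.nn as nn

class GCLSTMAnomaly(nn.Module):
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.gc = nn.Linear(n_features, hidden)      # weights of the graph convolution
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)             # per-step anomaly logit

    def forward(self, x, a_hat):
        # x: (nodes, time, features); a_hat: normalised adjacency (nodes, nodes)
        h = torch.relu(torch.einsum("ij,jtf->itf", a_hat, self.gc(x)))
        h, _ = self.lstm(h)                           # temporal dependencies per node
        return torch.sigmoid(self.head(h)).squeeze(-1)

nodes, steps, feats = 20, 96, 3
a = torch.eye(nodes) + 0.1 * torch.rand(nodes, nodes)    # toy adjacency
a_hat = a / a.sum(dim=1, keepdim=True)                    # row-normalise
model = GCLSTMAnomaly(feats)
scores = model(torch.randn(nodes, steps, feats), a_hat)   # (nodes, time) anomaly scores
```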

We conducted experiments using two distinct types of labeled data: three months of data in 2019 from 20 Commercial Microwave Links (CML) distributed around Germany and a 2.5-year period (June 2014 to December 2016) of soil moisture data from the TERENO SoilNet network in Hohes Holz, Germany. These datasets, encompassing an impressive 2.5 million samples, pose challenges in QC due to diverse dynamics, signal anomalies, and variations in temporal resolution and spatial densities of observations. 

The classification results demonstrated satisfactory performance, with Matthews Correlation Coefficients of over 0.6 and 0.8 for the CML and SoilNet datasets, respectively. To evaluate the added value of processing the spatial information provided by neighboring sensors, we also compared the results of our final GNN with a baseline model that uses the same LSTM layers but disregards the GC layer, which integrates the neighboring information. The GNN model exhibited improved performance, as evidenced by 5-fold cross-validation mean Area Under the Receiver Operating Characteristic Curve (AUC) values of 0.934 and 0.971 for the CML and SoilNet data, respectively. In contrast, the baseline model yielded mean AUC values of 0.877 and 0.950, highlighting the effectiveness of incorporating the information from neighboring sensors via the GC layers to enhance anomaly detection for environmental sensor time series data.

How to cite: Lasota, E., Polz, J., Houben, T., Schmidt, L., Schäfer, D., Bumberger, J., and Chwala, C.: Leveraging the Power of Graph Neural Networks in Environmental Time Series Anomaly Detection , EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-8410, https://doi.org/10.5194/egusphere-egu24-8410, 2024.

09:40–09:50
|
EGU24-14801
|
ECS
|
On-site presentation
Wenzhuo Wang, Zengchuan Dong, and Li Ren

Periods detected by an advanced, systematic technique can provide a reliable basis for water resources prediction and management. One of the main challenges of period mining is removing the effects of climate change and noise. This study presents the hierarchical discrete-continuous wavelet decomposition (HDCWD) model. The method provides a three-layer identification framework of detrending, denoising and mining by combining the discrete wavelet transform and the continuous wavelet transform. The dominating periods of precipitation in the Yellow River Basin and their spatiotemporal features are identified by applying HDCWD to different catchments. Results show the following: (1) Noise exists in the precipitation series in the Yellow River Basin and leads to overlooked periods. (2) Precipitation in the Yellow River Basin has dominating periods of 2–4 years and 7–9 years from 1956 to 1984, and a period of 2 years from 1998 to 2002. (3) The periodicity of precipitation in the Yellow River Basin varies among catchments: those at higher latitudes exhibit a longer period and those in the lower east exhibit a shorter period.
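
The three-layer idea (detrend, denoise, mine periods) can be illustrated with off-the-shelf wavelet routines; the sketch below uses PyWavelets on a synthetic annual series and is not the HDCWD implementation itself.

```python
# Illustrative three-layer workflow, not the exact HDCWD model: discrete
# wavelets strip trend and noise, then a continuous wavelet transform (Morlet)
# highlights the remaining dominant period of a synthetic annual series.
import numpy as np
import pywt

rng = np.random.default_rng(0)
years = np.arange(1956, 2016)
precip = 500 + 0.8 * (years - 1956) + 60 * np.sin(2 * np.pi * years / 3.0) \
         + 30 * rng.standard_normal(years.size)

# 1) detrend: drop the coarsest approximation of a DWT
coeffs = pywt.wavedec(precip, "db4", level=3)
coeffs[0] = np.zeros_like(coeffs[0])
detrended = pywt.waverec(coeffs, "db4")[: precip.size]

# 2) denoise: soft-threshold the finest detail coefficients
coeffs = pywt.wavedec(detrended, "db4", level=3)
coeffs[-1] = pywt.threshold(coeffs[-1], np.std(coeffs[-1]), mode="soft")
clean = pywt.waverec(coeffs, "db4")[: precip.size]

# 3) mine periods with a continuous wavelet transform
scales = np.arange(1, 16)
coefs, freqs = pywt.cwt(clean, scales, "morl")
periods = 1.0 / freqs                                   # in years (annual sampling)
dominant = periods[np.argmax((np.abs(coefs) ** 2).mean(axis=1))]
print(f"dominant period ~ {dominant:.1f} years")
```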

How to cite: Wang, W., Dong, Z., and Ren, L.: Spatiotemporal variations of the precipitation in the Yellow River Basin using a novel hierarchical discrete-continuous wavelet decomposition model, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-14801, https://doi.org/10.5194/egusphere-egu24-14801, 2024.

09:50–10:00
|
EGU24-8191
|
ECS
|
On-site presentation
Vitali Diaz, Peter van Oosterom, Martijn Meijers, Edward Verbree, Nauman Ahmed, and Thijs van Lankveld

The advantages of using point clouds for change detection analysis include comprehensive spatial and temporal representation, as well as high precision and accuracy in the calculations. These benefits make point clouds a powerful data type for spatio-temporal analysis. Nevertheless, most current change detection methods have been specifically designed and utilized for raster data. This research aims to identify the most suitable cloud-to-cloud (c2c) distance calculation algorithm for further implementation in change detection for spatio-temporal point clouds. Eight different methods, varying in complexity and execution time, are compared without converting the point cloud data into rasters. Hourly point cloud data from monitoring a beach-dune system's dynamics is used to carry out the comparison. The c2c distance methods are (1) the nearest neighbor, (2) least squares plane, (3) linear interpolation, (4) quadratic (height function), (5) 2.5D triangulation, (6) natural neighbor interpolation (NNI), (7) inverse distance weight (IDW) and (8) multiscale model-to-model cloud comparison (M3C2). We evaluate these algorithms, considering both the accuracy of the calculated distance and the execution time. The results can be valuable for analyzing and monitoring the (built) environment with spatio-temporal point cloud data.
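
Method (1), the nearest-neighbour c2c distance, can be sketched with a k-d tree; the two epochs below are random stand-ins for the hourly beach-dune scans.

```python
# Minimal sketch of nearest-neighbour cloud-to-cloud distances with SciPy's
# k-d tree; epoch_a and epoch_b are synthetic stand-ins for two scan epochs.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
epoch_a = rng.uniform(0, 100, size=(50_000, 3))              # reference cloud (x, y, z)
epoch_b = epoch_a + rng.normal(0, 0.05, size=epoch_a.shape)  # later scan with small change

tree = cKDTree(epoch_a)
dist, _ = tree.query(epoch_b, k=1)       # distance from each point in B to its nearest point in A
print(f"mean c2c distance: {dist.mean():.3f} m, 95th percentile: {np.percentile(dist, 95):.3f} m")
```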

Key terms: point cloud, spatio-temporal analysis, c2c distance, beach-dune system

References

Van Oosterom, P., van Oosterom, S., Liu, H., Thompson, R., Meijers, M. and Verbree, E. Organizing and visualizing point clouds with continuous levels of detail. ISPRS J. Photogramm. Remote Sens. 194 (2022) 119. https://doi.org/10.1016/J.ISPRSJPRS.2022.10.004

Vos, S., Anders, K., Kuschnerus, M., Lindenbergh, R., Höfle, B., Aarninkhof, S. and de Vries, S. A high-resolution 4D terrestrial laser scan dataset of the Kijkduin beach-dune system, The Netherlands. Sci Data 9, 191 (2022). https://doi.org/10.1038/s41597-022-01291-9

How to cite: Diaz, V., van Oosterom, P., Meijers, M., Verbree, E., Ahmed, N., and van Lankveld, T.: Comparison of cloud-to-cloud distance calculation methods for change detection in spatio-temporal point clouds, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-8191, https://doi.org/10.5194/egusphere-egu24-8191, 2024.

10:00–10:10

Posters on site: Tue, 16 Apr, 16:15–18:00 | Hall A

Display time: Tue, 16 Apr 14:00–Tue, 16 Apr 18:00
A.28
|
EGU24-3531
|
ECS
|
Babak Mohammadi, Hongkai Gao, Petter Pilesjö, Ye Tuo, and Zheng Duan

The complex interaction among meteorological, glaciological, and hydrological variables presents challenges for glacio-hydrological modeling, necessitating advanced methodologies to capture the intertwined system. This study aimed to combine a traditional process-based hydrological model and data-driven techniques to enhance hydrological predictions in glacierized catchments. One glacierized catchment in northern Sweden was used as the testing site. We used a process-based glacio-hydrological model (FLEXG) and a machine learning approach (the M5Tree model) to assess and enhance the predictive capabilities of hydrological simulations. A suite of meteorological variables, such as air temperature, precipitation, evapotranspiration, relative humidity, sunshine hours, solar radiation, and wind speed, in combination with glacio-hydrological outputs from the FLEXG model, including snow cover area, snow water equivalent, and glacier mass balance, were used as inputs to the M5Tree model. Nine distinct scenarios were examined to explore the individual and cumulative impacts of these variables on the accuracy of runoff simulation. We started with the first scenario (named M5Tree1), in which all meteorological and glacio-hydrological variables were used; this scenario serves as a benchmark for comparison against the other scenarios. Sequentially, each scenario omitted one variable to elucidate its specific contribution to runoff modeling. The final scenario (M5Tree9) used only air temperature and precipitation as inputs, reflecting their fundamental role in hydrological processes. Variational Mode Decomposition (VMD), a signal decomposition technique, was employed to enhance runoff modelling accuracy. This technique facilitated the dissection of each meteorological and glacio-hydrological variable into five distinct sub-signals, offering a more nuanced understanding of their contributions to runoff dynamics. Subsequently, the scenarios were re-evaluated with inputs derived from the VMD-decomposed variables (VMD-M5Tree1 to VMD-M5Tree9). The results showed remarkable improvements in the accuracy of runoff simulation with the incorporation of VMD. Our study demonstrated the significance of meticulous variable selection and decomposition techniques (particularly VMD) in improving model accuracy. We identified the optimal combination of meteorological and glacio-hydrological variables for robust runoff simulation. This study explored one approach among various methods to integrate traditional models and machine learning techniques and combine their respective strengths. Future research could explore other ways of combining traditional models and machine learning techniques to improve runoff simulation. Additionally, given the vulnerability of glacierized catchments to climate change, future studies should incorporate future climate projections to assess the adaptability of the proposed integrated modelling framework and to understand the impact of climate change on runoff in cold regions.
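
The decompose-then-learn pattern can be sketched as follows; since the study used VMD and an M5 model tree, the crude slow/fast split and the scikit-learn regression tree below are stand-ins chosen only to show how decomposed predictors enter the model.

```python
# Sketch of "decompose each predictor into sub-signals, then learn runoff from
# them"; a rolling-mean split stands in for VMD and DecisionTreeRegressor for
# the M5 model tree. Data are synthetic.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(2)
n = 1000
temp = pd.Series(5 * np.sin(np.arange(n) * 2 * np.pi / 365) + rng.normal(0, 1, n))
precip = pd.Series(rng.gamma(0.6, 4.0, n))
runoff = 0.4 * precip.rolling(5, min_periods=1).mean() + 0.2 * temp.clip(lower=0) \
         + rng.normal(0, 0.3, n)

def decompose(s, window=30):
    """Stand-in for VMD: split a series into a slow and a fast sub-signal."""
    slow = s.rolling(window, min_periods=1, center=True).mean()
    return pd.DataFrame({"slow": slow, "fast": s - slow})

X = pd.concat([decompose(temp).add_prefix("temp_"),
               decompose(precip).add_prefix("pr_")], axis=1)
split = int(0.7 * n)
tree = DecisionTreeRegressor(max_depth=6).fit(X[:split], runoff[:split])
pred = tree.predict(X[split:])
print("test RMSE:", mean_squared_error(runoff[split:], pred) ** 0.5)
```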

How to cite: Mohammadi, B., Gao, H., Pilesjö, P., Tuo, Y., and Duan, Z.: Integration of hydrological models with data-driven techniques in cold regions, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-3531, https://doi.org/10.5194/egusphere-egu24-3531, 2024.

A.29
|
EGU24-4177
|
ECS
Yiming Yin, Ross Woods, and Rafael Rosolem

The accurate estimation of catchment response time is crucial for understanding hydrological processes and making informed water resource management decisions. In the rainfall-runoff model with linear reservoir equation, the time parameter, Tc, represents the time delay of catchment routing. Recent studies have shown that the Detrending Moving Cross-Correlation Analysis (DMCA) method can provide a more practical and direct determination of catchment response time, surpassing the limitations of the linear reservoir model.

This study aims to investigate whether the catchment response time obtained through the DMCA method can be effectively utilized as a replacement for the catchment timescale in the rainfall-runoff model with linear reservoir equation. The DMCA method is employed to analyze hourly hydrological time series from the CAMELS-GB (Catchment Attributes and Meteorology for Large-sample Studies in Great Britain) dataset. The calculated catchment response times are then compared with the catchment timescales estimated using the linear reservoir equation.

Our results indicate that the catchment response time derived from the DMCA method exhibits a close correspondence with the catchment timescale estimated by the linear reservoir equation.

In conclusion, this study demonstrates the potential of replacing the catchment timescale in the rainfall-runoff model based on the linear reservoir equation with the catchment response time obtained through the DMCA method. This substitution leads to more robust and accurate hydrological modeling, particularly in ungauged catchments or catchments with limited time series data. The research findings indicate that the DMCA method offers a valuable tool for hydrologists and water resource managers seeking to enhance their understanding and management of catchment dynamics. However, further research is warranted to explore its application across catchments from different regions.
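
A hedged sketch of a DMCA-style calculation is given below: fluctuations of the cumulative rainfall and streamflow series are correlated over a range of moving-average windows, and the window with the smallest correlation magnitude is taken as indicative of the response time. The conversion from window size to a timescale via (L - 1)/2 is an assumption of this sketch, not a result from the study.

```python
# Hedged DMCA-style sketch: scan moving-average window sizes, correlate the
# fluctuations of cumulative rainfall and streamflow, and use the window with
# minimum |rho| as an indicator of catchment response time.
import numpy as np
import pandas as pd

def dmca_response_time(p, q, windows=range(3, 201, 2)):
    P, Q = pd.Series(p).cumsum(), pd.Series(q).cumsum()
    rho = {}
    for L in windows:
        fp = P - P.rolling(L, center=True).mean()      # fluctuations around moving average
        fq = Q - Q.rolling(L, center=True).mean()
        rho[L] = (fp * fq).mean() / np.sqrt((fp ** 2).mean() * (fq ** 2).mean())
    best = min(rho, key=lambda L: abs(rho[L]))
    return (best - 1) / 2, rho                          # response time in time steps (assumed rule)

# synthetic hourly rainfall and a delayed, smoothed streamflow response
rng = np.random.default_rng(3)
rain = rng.gamma(0.3, 2.0, 5000)
flow = np.convolve(rain, np.exp(-np.arange(48) / 12.0), mode="full")[: rain.size]
tc, _ = dmca_response_time(rain, flow)
print(f"estimated response time ~ {tc:.0f} hours")
```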

How to cite: Yin, Y., Woods, R., and Rosolem, R.: Replacing Catchment Timescale (Tc) in the rainfall-runoff model based on the linear reservoir equation with Catchment Response Time Obtained via Detrending Moving Cross-Correlation Analysis (DMCA) Method , EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-4177, https://doi.org/10.5194/egusphere-egu24-4177, 2024.

A.30
|
EGU24-5298
|
ECS
|
Fedor Scholz, Manuel Traub, Thomas Scholten, Christiane Zarfl, and Martin Butz

Traditional hydrology simulates rainfall-runoff dynamics by means of process-based models (PBMs), which are derived from physical laws. These models exhibit realistic behavior, and their internal states can be directly interpreted, because they reflect the modeled current state of the hydrological dynamics. Natural processes in general are very complex, though, such that it is simply impossible to model every aspect in detail. In the case of hydrology, for example, anthropogenic influences, such as the exact influence of the sewer system, as well as natural factors, such as soil and rock types and structures, are extremely hard to model in all their details. As a result, high uncertainty remains about the models' necessary components and their parameterizations, leaving room for improvement (Sit et al., 2020; Nearing et al., 2021). Data-driven approaches, like deep neural networks (DNNs), offer an alternative. They are trained on large amounts of data by gradient descent via automatic differentiation. This enables them to automatically discover relationships in the training data, which often leads to superior performance (Kratzert et al., 2018; Shen, 2018). Due to the DNNs' complexity, however, these relationships are hard to investigate and often fail to respect physical laws. Hybrid modeling combines both approaches in order to benefit from their respective advantages. In this work, we present a physics-inspired, fully differentiable and fully distributed rainfall-runoff model to predict river discharge from precipitation. Our DNN architecture consists of a land module and a river module. The land module receives RADOLAN-based precipitation data and propagates runoff laterally over a regular grid (1 km² grid size), taking land surface structure information into account. Runoff is then captured as input to the river module, which mimics the actual river network by means of a graph neural network. Due to the involved, physically motivated inductive biases, our model can be trained end-to-end from the RADOLAN data as the main input and sparse discharge data as output. We showcase our model on the Neckar river catchment in South Germany, achieving NSE values of 0.88 and 0.84 when we predict 1 and 10 days into the future, respectively. In contrast, persistence yields NSE values of 0.5 and 0.06 for the corresponding forecast horizons. Due to our model's differentiability, we expect to be able to infer the origin of measured discharge or turbidity (and thus erosion) in the near future. We thus hope that this information could be used to create policies that mitigate both the danger of floods and extreme erosion.

How to cite: Scholz, F., Traub, M., Scholten, T., Zarfl, C., and Butz, M.: Introducing a fully differentiable, fully distributed Rainfall-Runoff Model, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-5298, https://doi.org/10.5194/egusphere-egu24-5298, 2024.

A.31
|
EGU24-6256
|
ECS
Based on hydroinformatics and machine learning: Evolutionary characteristics of water quality in the Yangtze River Basin over the past two decades
(withdrawn)
Jiang Wu, Ling-Yan He, and Hui Zeng
A.32
|
EGU24-8516
|
ECS
Sabrina Lanciotti, Leonardo Alfonso, Elena Ridolfi, Fabio Russo, and Francesco Napolitano

According to the Intergovernmental Panel on Climate Change (IPCC), the variability of extreme rainfall events is increasing in many locations. The continuous expansion of urban areas makes urban flooding more common, thus increasing the need for improved management of drainage systems in large cities. Urban pluvial flooding (UPF) occurs when surface runoff cannot be efficiently conveyed into the drainage system, due to intense rainfall events exceeding the capacity of stormwater drainage systems, or due to poorly maintained inlets, which are often either partially or fully blocked. Many drainage systems may not be efficient due to outdated design approaches that do not consider these aspects. Therefore, there is a need to improve the design of structures and to prioritize risk adaptation and mitigation strategies to build cities resilient to the effects of pluvial flooding. During extreme rainfall events generating pluvial flooding, discharges exceeding the sewer system capacity are diverted by sewer overflows. For this reason, the objective of this work consists of defining a methodology to determine the optimal management strategy to mitigate sewer overflows using machine learning techniques. Here we simulate pluvial flooding within a large urban area by using the freely available Storm Water Management Model EPA-SWMM, based on a detailed reproduction of the geometric characteristics of a branch of the drainage network in a large city. By simulating the propagation of synthetic hyetographs through the pipelines and hydraulic structures, the dynamic operating conditions of the actual network are explored using machine learning techniques, applying Python to the SWMM data model through PySWMM. The project, which is ongoing, thus aims at the optimal management of the combined overflow devices of a sewer system through their real-time control during a flood event, to mitigate pluvial flooding risk in urban areas.
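
The real-time-control loop can be sketched with PySWMM's simulation interface; the input file, node and orifice identifiers are placeholders, and the simple threshold rule stands in for the machine-learning-derived control policy.

```python
# Hedged sketch of a real-time-control loop, assuming PySWMM's Simulation,
# Nodes and Links interface; model file and element IDs are placeholders.
from pyswmm import Simulation, Nodes, Links

with Simulation("drainage_branch.inp") as sim:          # hypothetical SWMM model
    storage = Nodes(sim)["CSO_chamber"]
    gate = Links(sim)["overflow_orifice"]
    for _ in sim:                                       # advance one routing step at a time
        # a trained ML policy would set the gate from forecasts and system state;
        # here a simple depth threshold illustrates the control hook
        gate.target_setting = 1.0 if storage.depth > 1.5 else 0.2
```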

How to cite: Lanciotti, S., Alfonso, L., Ridolfi, E., Russo, F., and Napolitano, F.: Pluvial Flooding Risk Mitigation: a machine learning approach for optimal management of an urban drainage system, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-8516, https://doi.org/10.5194/egusphere-egu24-8516, 2024.

A.33
|
EGU24-9931
|
ECS
Balazs Bischof, Erwin Zehe, and Ralf Loritz

Previous studies have shown that Long Short-Term Memory networks (LSTMs) offer a large potential for data-based learning and hydrological predictions. This study focuses on exploring the largely untapped potential of such a modeling approach using a large sample of in-situ soil moisture data based on Time-Domain Reflectometry (TDR) measurements, collected across the Attert experimental basins in Luxembourg. Soil moisture plays a critical role in various hydrological processes, influencing groundwater recharge, governing infiltration dynamics, and contributing significantly to the generation of overland flow. Additionally, it stands as a key determinant for the water supply essential for sustaining vegetation and agricultural crops. Here we introduce an LSTM model that has been trained on extensive long-term in-situ soil moisture observations with the objective of extrapolating the dynamics of soil moisture across spatial dimensions, temporal scales, and depths. A key challenge in this context is how to deal with the multiscale variability of TDR observations, which arises from small-scale variations in soil texture as well as larger-scale spatial variability of physiographic and meteorological characteristics. Acknowledging that this multiscale variability of soil moisture is difficult to disentangle with standard available predictors and their gradients, we place particular emphasis on data processing and on understanding such variability. This emphasis is crucial to mitigate potential confusion within the model, ensuring a more accurate representation of soil moisture dynamics. For this purpose, we evaluate the efficacy of LSTMs in capturing soil moisture dynamics, while concurrently aiming to clarify variability and address uncertainty. Furthermore, employing clustering techniques and network theory approaches, we aim to discern systematic variability and patterns, considering model performance and the relationships within soil moisture measurement time series. As a result, we demonstrate the advantages of employing LSTMs to assess soil moisture dynamics at the catchment scale, while emphasizing the exploration of drawbacks and limitations inherent in purely data-based learning. This analysis provides a valuable guide for future modeling attempts, offering an opportunity to depict spatial and temporal variations in soil water storage. Such representations prove beneficial in the development of early-warning systems for potential dry events.

How to cite: Bischof, B., Zehe, E., and Loritz, R.: Using neural networks for predicting soil water storage based in situ soil moisture observations, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-9931, https://doi.org/10.5194/egusphere-egu24-9931, 2024.

A.34
|
EGU24-11137
|
ECS
|
|
Fereshteh Taromideh, Giovanni Francesco Santonastaso, and Roberto Greco

Real-time data plays a crucial role in predicting short-duration rainfall. These predictions have a significant impact on our daily life, especially in emergencies that can imperil human safety and the environment. A tragic example of this occurred on November 26, 2022, in Casamicciola, island of Ischia, in the Gulf of Naples (Italy), when heavy rainfall triggered landslides, resulting in loss of lives and extensive damage to buildings and infrastructure. Such destructive consequences highlight the urgent need for accurate short-term rainfall forecasts.

In fact, it is very important to have reliable short-term rainfall forecasting for early warning purposes. This can help save lives and property by preventing the effects of flooding, landslides, and other hazards caused by heavy rainfall. It is hard to understand and model how rainfall changes over time, and therefore predicting rainfall in the short term is a complex challenge. Many of the current models use complicated statistical methods that are often too expensive and time-consuming. In contrast, machine-learning (ML) models can find hidden patterns in rainfall data and predict the hourly or sub-hourly amount of rain with limited computational burden.

In this study, a novel ML model is developed for nowcasting rainfall, to explore how it can make effective and quick short-term forecasts of precipitation. Specifically, the random forest (RF) algorithm is used, as recent studies have found this ML approach to be suitable (Mdegela et al., 2023). The study area is located on the island of Ischia, where 4 rain gauges (Ischia, Monte Epomeo, Forio, Piano Liguori) with a temporal resolution of 10 minutes are installed. The proposed model uses both the precipitation time series of the rain gauges and the surface rainfall intensity provided by the radar managed by the Civil Protection agency, with a temporal resolution of 5 minutes and a spatial resolution of 1 km, to predict future precipitation. Different grid sizes (20x20, 30x30, 40x40 and 50x50 km) centred on the island of Ischia are considered to select the best radar input data (features) for the RF algorithm. The dataset was randomly split into RF model training (70% of the data) and validation (30% of the data) subsets. The Minimum Inter-arrival Time (MIT) criterion was adopted for the definition of rainfall events within the rain gauge precipitation records (Heneker et al., 2001). A rainfall event is defined as a rainfall period preceded and followed by dry periods longer than the MIT.
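
A minimal sketch of this nowcasting setup with scikit-learn is shown below; the lag depth, feature names, and synthetic gauge/radar series are assumptions, not the study's actual configuration.

```python
# Illustrative only: lagged rain-gauge and radar features predict the gauge
# rainfall one 10-min step ahead with a random forest on synthetic data.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
n = 5000                                               # 10-min records
gauge = pd.Series(rng.gamma(0.2, 1.5, n), name="gauge_mm")
radar = gauge.rolling(3, min_periods=1).mean() + rng.normal(0, 0.1, n)   # proxy radar cell

lags = range(1, 7)
features = pd.DataFrame(
    {**{f"gauge_lag{k}": gauge.shift(k) for k in lags},
     **{f"radar_lag{k}": radar.shift(k) for k in lags}})
data = pd.concat([features, gauge], axis=1).dropna()   # target: rainfall at the next step

split = int(0.7 * len(data))                           # the study used a random 70/30 split
rf = RandomForestRegressor(n_estimators=300, random_state=0)
rf.fit(data.iloc[:split, :-1], data.iloc[:split, -1])
print("R2 on held-out data:", rf.score(data.iloc[split:, :-1], data.iloc[split:, -1]))
```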

The results indicate that the RF model provides reliable short-term precipitation forecasts using only observed values as input, making it a fast, simple, and convenient method for nowcasting. The resulting precipitation forecast has the potential to be used in an early warning system to mitigate the impact of landslides.

 

References

Heneker, T.M., Lambert, M.F., Kuczera, G., 2001. A point rainfall model for risk-based design. J. Hydrol. 247, 54–71.

Mdegela, L., Municio, E., De Bock, Y., Luhanga, E., Leo, J. and Mannens, E., 2023. Extreme Rainfall Event Classification Using Machine Learning for Kikuletwa River Floods. Water, 15(6), 1021.

 

How to cite: Taromideh, F., Santonastaso, G. F., and Greco, R.: Rainfall nowcasting with machine-learning for landslide early warning , EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-11137, https://doi.org/10.5194/egusphere-egu24-11137, 2024.

A.35
|
EGU24-11277
Accurate monthly forecasting of Rainfall pattern in Atlantic climates: an Empirical Fourier Decomposition-based Deep ensemble learning paradigm
(withdrawn)
Mehdi Jamei and Aitazaz Ahsan Farooque
A.36
|
EGU24-13369
|
ECS
Rubén Carrillo and Diana Núñez

The numerical analysis of partial differential equations (PDEs) is a classic and highly active research area. In this area, there is a wide variety of established methods, such as finite differences, finite elements, finite volumes, and spectral methods. Furthermore, we can affirm that all of them have an analytical origin.

In recent decades, geometric methods based on Exterior Calculus have been proposed, owing to the geometric content of many physical theories. In this group, we can highlight Finite Element Exterior Calculus (FEEC), a theoretical approach to designing and understanding finite element methods for the numerical solution of various PDEs. This method is grounded in traditional finite element functional analysis techniques, but it is accompanied by tools from topology and homological algebra.

An alternative method in this field is Discrete Exterior Calculus (DEC). Exterior Calculus generalizes vector calculus to higher dimensions and differential manifolds, and DEC is one of its discretizations, producing a numerical method for solving PDEs on simplicial complexes.

In DEC, geometric operators on simplicial complexes are used in any dimension, and equivalent discrete versions are proposed for differential objects and operators, such as differential forms and vector fields. DEC is proposed as a method for solving partial differential equations that takes into account the geometric and analytical characteristics of the operators and of the domains over which they apply.

Mathematical heat transfer models in porous media have recently received considerable attention. These models are given by second-order partial differential equations expressing conservation of energy for heat and flow. Thermal equilibrium and non-equilibrium models are used to study the thermal characteristics of conduction and advection within porous media. This work analyzes 2D numerical models of heat transport in porous aquifers with DEC.
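
On a uniform one-dimensional mesh, the DEC building blocks reduce to small matrices, which the toy sketch below uses to step the heat equation forward; the two-dimensional simplicial-complex machinery of the study is not reproduced here.

```python
# Toy illustration only: the incidence matrix as discrete exterior derivative
# d0 and diagonal Hodge stars give a discrete Laplacian on 0-forms, used here
# to integrate 1-D heat diffusion with explicit Euler steps.
import numpy as np

n, dx, alpha, dt = 50, 1.0, 0.5, 0.5
d0 = np.diff(np.eye(n), axis=0)            # (n-1) x n incidence matrix: vertex values -> edge differences
star1 = np.eye(n - 1) / dx                 # Hodge star on primal edges (length dx)
star0_inv = np.eye(n) / dx                 # inverse Hodge star on primal vertices
laplacian = star0_inv @ d0.T @ star1 @ d0  # DEC Laplacian on 0-forms (approx. -d2/dx2)

T = np.zeros(n)
T[n // 2] = 100.0                          # initial temperature spike
for _ in range(200):                       # dT/dt = alpha * d2T/dx2
    T = T - dt * alpha * (laplacian @ T)
print(T.max().round(2), T.sum().round(2))  # peak decays while total heat is conserved
```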

How to cite: Carrillo, R. and Núñez, D.: Application of Discrete Exterior Calculus Method to the Heat Transport Equation in Porous Aquifers, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-13369, https://doi.org/10.5194/egusphere-egu24-13369, 2024.

A.37
|
EGU24-14375
|
ECS
Jihoon Shin, YoungWoo Kim, Taeseung Park, and YoonKyung Cha

Water quality monitoring plays a crucial role in establishing effective management strategies for ensuring the safety and sustainability of water resources. Recently, with advances in sensor technologies, autonomous water quality monitoring has been increasingly used to obtain detailed temporal variations of water quality in the river network. However, irregular time series data are prevalent in multi-sensor monitoring systems and the resulting missing values limit the ability of data to serve as a decision basis. A modeling tool that can efficiently handle irregular time series data is required to derive useful insights from the sensor data. The combined use of feature engineering and attention mechanisms has shown benefits in dealing with irregular time series data from improved performance and explainability. In this study, hybrid deep learning that incorporates reverse time attention and trainable decay mechanisms (RETAIN-D) was used to analyze sensor data collected from multiple sites located in the upper section of the Geum River, South Korea. RETAIN-D was developed to predict the variations in the level of chlorophyll-a (Chl-a) concentrations and to analyze spatiotemporal associations between its influencing factors at different monitoring sites. RETAIN-D showed a high degree of accuracy (Accuracy = 0.81–0.90, AUC = 0.67–0.90, F1 score = 0.87–0.88 for test sets) for various chlorophyll-a standards. Trainable decay mechanism in RETAIN-D allowed predictions of Chl-a level in missing periods without manual feature engineering. Chl-a concentrations from the nearest adjacent tributary had high importance in predicting Chl-a levels for the target site. The contribution of input features among different time steps was generally higher in the recent time steps. These results demonstrate the usefulness of the hybrid deep learning approach as an efficient Big Data analysis tool for water quality and resource management.
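
The trainable-decay idea can be sketched generically (in the spirit of GRU-D-style decay, not the RETAIN-D implementation): the influence of the last observed sensor value fades with the time since observation, at a learned rate.

```python
# Generic trainable-decay sketch: observed values pass through unchanged, while
# missing values blend the last observation and a fallback mean, with a decay
# factor learned from the time since the last observation.
import torch
import torch.nn as nn

class TrainableDecay(nn.Module):
    def __init__(self, n_features):
        super().__init__()
        self.w = nn.Linear(n_features, n_features)        # learns per-feature decay rates

    def forward(self, x, mask, delta):
        # x: last observed values; mask: 1 if observed now; delta: time since last observation
        gamma = torch.exp(-torch.relu(self.w(delta)))      # decay factor in (0, 1]
        x_mean = (x * mask).sum(dim=1, keepdim=True) / mask.sum(dim=1, keepdim=True).clamp(min=1)
        return mask * x + (1 - mask) * (gamma * x + (1 - gamma) * x_mean)

decay = TrainableDecay(n_features=4)
x = torch.randn(8, 24, 4)                       # batch, time, sensors
mask = (torch.rand(8, 24, 4) > 0.3).float()     # about 30 % missing
delta = torch.cumsum(1 - mask, dim=1)           # crude steps-since-last-observation
imputed = decay(x, mask, delta)                 # input for the downstream attention model
```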

How to cite: Shin, J., Kim, Y., Park, T., and Cha, Y.: Hybrid Deep Learning Approach for Simultaneous Feature Engineering and Explanation of Water Quality Sensor Data , EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-14375, https://doi.org/10.5194/egusphere-egu24-14375, 2024.

A.38
|
EGU24-14464
|
ECS
Natalie Lerma and Jonathan Goodall

Urban stormwater poses significant challenges related to flooding and water quality. Cities utilize 311 service request platforms for residents to report incidents, enabling governmental attention. Existing studies have explored socioeconomic and climatic factors' impact on the submission of requests, but a comprehensive understanding of how socioeconomic, climatic, and physical parameters collectively influence flooding and sanitation reporting remains unexplored. This study analyzes four years of 311 service requests for flooding stoppages and raw sewage concerns in Norfolk, VA, employing two Zero-Inflated Negative Binomial mixed-effects models. These models identify statistically significant predictors for flooding stoppage and raw sewage concern request counts. The 311 service request data was geo-aggregated to the census tract level and temporally aggregated into weekly sums to account for possible lags in event occurrence and reporting of flooding stoppage or raw sewage. Duplicate service requests were also removed from the dataset. This study incorporates a wide range of explanatory variables, including socioeconomic factors (race, income, gender, education level), climatic factors (precipitation, tide level, groundwater level), and physical factors (topographic wetness index, impervious cover, distance from water bodies). All explanatory variables were aggregated to census tract level and weekly values where appropriate. Results reveal precipitation as a robust predictor in both flooding stoppages and raw sewage concerns. The conditional model for flooding stoppages identifies additional predictors such as educational attainment, race, and groundwater level. For raw sewage concerns, tide emerges as a significant predictor in the conditional and zero-inflation models, with the zero-inflation model identifying precipitation and groundwater level as well. Models were tested through nested cross-validation to validate their robustness. The study underscores the importance of climatic scenarios in predicting service requests, emphasizing their reliability in directing municipal funds for addressing community-identified flooding and sewage problems. Physical parameters like the topographic wetness index show weak predictability in low-relief coastal plains, such as Norfolk, VA. Moreover, the study sheds light on the limited relationship between flooding stoppages and raw sewage service requests, where the combined presence of these reports may indicate potential water quality and health concerns. This research contributes to a nuanced understanding of the multifaceted influences on urban stormwater reporting, facilitating targeted strategies for effective stormwater management.
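
For illustration, a pooled zero-inflated negative binomial fit (without the random effects of the mixed-effects models used in the study) can be written with statsmodels; the variable names stand in for the weekly census-tract aggregates described.

```python
# Hedged sketch: pooled ZINB count model on synthetic weekly request counts.
# No random effects are included, unlike the study's mixed-effects models.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(5)
n = 2000
df = pd.DataFrame({
    "precip_mm":  rng.gamma(2.0, 10.0, n),
    "tide_m":     rng.normal(0.5, 0.2, n),
    "gw_level_m": rng.normal(1.0, 0.3, n),
})
lam = np.exp(-2.0 + 0.03 * df.precip_mm + 0.5 * df.tide_m)
df["requests"] = np.where(rng.random(n) < 0.4, 0, rng.poisson(lam))   # excess zeros

exog = sm.add_constant(df[["precip_mm", "tide_m", "gw_level_m"]])      # conditional (count) part
exog_infl = sm.add_constant(df[["precip_mm", "gw_level_m"]])           # zero-inflation part
model = ZeroInflatedNegativeBinomialP(df["requests"], exog, exog_infl=exog_infl, p=2)
result = model.fit(maxiter=200, disp=False)
print(result.params)
```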

How to cite: Lerma, N. and Goodall, J.: Understanding socioeconomic, climatic, and physical factors on 311 flooding and raw sewage reports in Norfolk, VA, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-14464, https://doi.org/10.5194/egusphere-egu24-14464, 2024.

A.39
|
EGU24-14884
|
ECS
TimeSpatData: An R Package for Spatio-temporal DataProcessing, Analysis, and Visualization
(withdrawn)
Kan Lei, Hans Dürr, and Martina Flörke
A.40
|
EGU24-15690
Susanna Dazzi

Physics-informed neural networks (PINNs) have recently been developed as a novel solution approach for physical problems governed by partial differential equations (PDEs). Compared to purely data-driven methods, PINNs have the advantage of embedding physical constraints in the training process, thus increasing their reliability. Compared to traditional numerical methods for PDEs, PINNs have the advantage of being “meshless”; they are in general less accurate and more computationally expensive, but also more suitable to sparse-data assimilation and to inverse modelling, which is increasing their popularity in many scientific fields. However, hydraulic applications of PINNs in the context of free-surface flows are still in their infancy.

In this work, the effectiveness of PINNs to model one-dimensional free-surface flows over non-horizontal bottom is tested. The governing PDEs are the shallow water equations (SWEs), which represent the mass and momentum conservation in free-surface flows. The inclusion of a spatially variable topography in a meshless method such as PINNs is not trivial. Here, the idea of solving the augmented system of SWEs with topography is exploited. Augmentation consists in treating the bed elevation as a conserved variable (together with water depth and unit discharge) and adding a fictitious equation to the system, which states that this variable is constant in time (i.e., its time derivative is null), while it can be variable in space (its space derivative is included in the bed slope source term). In this way, bed elevation can be easily provided with other initial conditions, and the fixed-bed constraint preserves its value in time.

Different cases of unsteady flows with flat and non-flat bottom are considered, and the accuracy obtained using PINNs with augmented SWEs is checked by comparing PINNs predictions with analytical solutions. Results show that a fair accuracy for depth and velocity can be obtained, even for some challenging test cases such as the dam-break over a bottom step and the planar flow over a parabolic basin (Thacker’s test case). Moreover, it is shown that, if PINNs are applied to a case with horizontal bottom, for which topography is not strictly necessary, similar accuracy and computational time are obtained when PINNs solve standard SWEs or augmented SWEs. It can therefore be concluded that the augmentation of SWEs is a simple but promising strategy to deal with flows over complex bathymetries using PINNs, which paves the way for applications to flows over more realistic topographies.
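
The physics-informed loss for the augmented system can be sketched with automatic differentiation; the network size, collocation sampling, and the softplus device used to keep the depth positive are assumptions of this sketch.

```python
# Conceptual sketch of the physics-informed residual loss for the augmented
# 1-D shallow water equations, with bed elevation z carried as a conserved
# variable (z_t = 0). Initial/boundary data terms are omitted.
import torch
import torch.nn as nn
import torch.nn.functional as F

g = 9.81
net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 3))                    # (x, t) -> (h, q, z)

def d(f, v):                                             # derivative of f w.r.t. v
    return torch.autograd.grad(f, v, grad_outputs=torch.ones_like(f), create_graph=True)[0]

def pde_loss(x, t):
    h, q, z = net(torch.cat([x, t], dim=1)).split(1, dim=1)
    h = F.softplus(h) + 1e-3                             # keep depth positive in the sketch
    mass = d(h, t) + d(q, x)                             # h_t + (hu)_x = 0
    mom = d(q, t) + d(q ** 2 / h + 0.5 * g * h ** 2, x) + g * h * d(z, x)   # momentum + bed slope
    bed = d(z, t)                                        # fixed-bed constraint z_t = 0
    return (mass ** 2).mean() + (mom ** 2).mean() + (bed ** 2).mean()

x = torch.rand(1024, 1, requires_grad=True)              # collocation points in (x, t)
t = torch.rand(1024, 1, requires_grad=True)
loss = pde_loss(x, t)                                    # + initial/boundary data terms in practice
loss.backward()
```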

How to cite: Dazzi, S.: Solving Shallow Water Equations with Topography using Physics-Informed Neural Networks , EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-15690, https://doi.org/10.5194/egusphere-egu24-15690, 2024.

A.41
|
EGU24-18122
|
ECS
Enhancing SVM’s robustness of weekly streamflow prediction based on three feature selection algorithms
(withdrawn)
Bouchra Bargam, Abdelghani Boudhar, Christophe Kinnard, Karima Nifa, Mostafa Bousbaa, Haytam Elyoussfi, and Abdelghani Chehbouni
A.42
|
EGU24-19399
|
ECS
|
Sanjeev Kumar and Chandra Shekhar Prasad Ojha

The vortex tube silt ejector (VTSE) is an important hydraulic device for reducing sediment deposits in canals, providing a very effective alternative to manual sediment removal. The complexity of the silt removal process arises from the spatially varied flow (SVF) in the channel and the rotational flow within the tube. Conventional models often struggle to accurately predict silt removal efficiency due to these complexities. However, recent advancements in machine learning (ML) present robust alternatives for understanding and modelling such complex hydraulic processes. In this study, we explore the application of various ML models, i.e. Support Vector Machine (SVM), Random Forest (RF), and Random Tree (RT), to quantify the efficiency of the vortex tube silt ejector using laboratory datasets. Comparative analyses are conducted with conventional models to assess the efficacy of the ML-based models. The findings of the study reveal that the RT model, exhibiting a Root Mean Square Error (RMSE) of 2.165 and a Nash-Sutcliffe Efficiency (NSE) of 0.98, outperforms the other applied ML models, demonstrating superior accuracy with fewer errors. Sensitivity analysis identifies the extraction ratio as a critical parameter in computing vortex tube silt ejector removal efficiency. The outcomes derived from the ML-based models presented in this study hold practical implications for hydraulic engineers and researchers involved in assessing the sediment removal efficiency of vortex tube silt ejectors. Nevertheless, formulating more universally applicable models may be imperative for comprehensive research, both within the same field and in related areas.
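
A minimal sketch of such a model comparison on synthetic stand-in data, scored with RMSE and NSE:

```python
# Illustrative comparison only (synthetic data, not the laboratory dataset):
# fit SVM and random forest regressors to removal-efficiency predictors and
# score them with RMSE and Nash-Sutcliffe Efficiency.
import numpy as np
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

def nse(obs, sim):
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(7)
n = 400
extraction_ratio = rng.uniform(0.05, 0.25, n)            # key predictor per the study
velocity = rng.uniform(0.3, 1.2, n)
sediment_size = rng.uniform(0.1, 0.6, n)
eff = 60 + 120 * extraction_ratio - 10 * velocity - 20 * sediment_size + rng.normal(0, 2, n)

X = np.column_stack([extraction_ratio, velocity, sediment_size])
Xtr, Xte, ytr, yte = train_test_split(X, eff, test_size=0.3, random_state=0)
for name, model in [("SVM", SVR(C=10)),
                    ("RF", RandomForestRegressor(n_estimators=300, random_state=0))]:
    pred = model.fit(Xtr, ytr).predict(Xte)
    print(name, "RMSE:", round(np.sqrt(np.mean((yte - pred) ** 2)), 2),
          "NSE:", round(nse(yte, pred), 3))
```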

How to cite: Kumar, S. and Ojha, C. S. P.: Development and Evaluation of Machine Learning Models for Vortex Tube Sediment Ejector, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-19399, https://doi.org/10.5194/egusphere-egu24-19399, 2024.

A.43
|
EGU24-19697
|
ECS
Giuseppe Cipolla, Antonio Francipane, Dario Treppiedi, Calogero Mattina, and Leonardo Valerio Noto

Designing hydraulic infrastructure and/or carrying out a flood risk assessment, as mandated by Directive 2007/60/EC of the European Parliament regarding the assessment and management of flood risk, requires estimating flood discharges for different return periods. In the current era, Geographic Information Systems (GIS) make the integration of spatially distributed data and advanced analytical tools for hydrological applications more efficient.

This work introduces a Python-based tool that merges GIS functionalities (i.e., open-source geospatial libraries, such as native QGIS plugins, GDAL and SAGA) with hydrological modeling techniques, providing a comprehensive framework for watershed analysis aimed at deriving synthetic flood hydrographs for specified return periods. The tool is composed of different modules performing different operations: following the delineation of the watershed based on a user-specified outlet, the tool uses a regionalized approach to establish Depth-Duration-Frequency (DDF) curves and derives synthetic Chicago hyetographs for the specified return periods. The tool comprises a module for calculating runoff depths using the Curve Number method and another module where flow hydrographs are derived by using a distributed unit hydrograph (D-UH) through a spatial representation of times of concentration, accounting for varying flow velocities within the watershed. Additionally, the tool allows for the simulation of the basin response to historical precipitation. In the present study, the tool underwent testing on catchments of Sicily (Italy), although it is worth noting that it can be customized for application in various regions worldwide.
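
One building block of this workflow, the SCS Curve Number runoff depth, can be written as a short gridded function; the CN raster and event rainfall below are synthetic.

```python
# SCS Curve Number runoff depth on a synthetic CN grid (values are placeholders).
import numpy as np

def scs_runoff_depth(p_mm, cn, ia_ratio=0.2):
    """Runoff depth Q (mm) from event rainfall P (mm) and Curve Number CN."""
    s = 25400.0 / cn - 254.0                 # potential maximum retention (mm)
    ia = ia_ratio * s                        # initial abstraction
    return np.where(p_mm > ia, (p_mm - ia) ** 2 / (p_mm - ia + s), 0.0)

cn_grid = np.random.default_rng(6).integers(55, 95, size=(200, 200)).astype(float)
print(scs_runoff_depth(80.0, cn_grid).mean())    # mean runoff depth for an 80 mm event
```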

How to cite: Cipolla, G., Francipane, A., Treppiedi, D., Mattina, C., and Noto, L. V.: Comprehensive Hydrological Modeling Tool for Flood Discharge Estimation in Sicilian Watersheds, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-19697, https://doi.org/10.5194/egusphere-egu24-19697, 2024.