EGU26-2512, updated on 13 Mar 2026
https://doi.org/10.5194/egusphere-egu26-2512
EGU General Assembly 2026
© Author(s) 2026. This work is distributed under
the Creative Commons Attribution 4.0 License.
Poster | Tuesday, 05 May, 16:15–18:00 (CEST), Display time Tuesday, 05 May, 14:00–18:00
 
Hall A, A.65
Benchmarking fine-tuning strategies for LSTM rainfall-runoff models in the Mekong basin
Rocco Palmitessa1, Connor Chewning1, Jakob Luchner2, and Elbys Jose Meneses2
  • 1DHI A/S, Technology & Innovation, Denmark
  • 2DHI A/S, Energy & Water Resources, Denmark

Neurohydrological models, particularly Long Short-Term Memory (LSTM) networks, are increasingly recognized as valid alternatives to conceptual and physics-based Global Hydrological Models (GHMs). The literature suggests that regionally trained and fine-tuned LSTMs typically outperform models trained exclusively on single catchments. To systematically assess the benefits of different fine-tuning strategies, this study tested three approaches across 60 catchments in the Mekong basin. We then compared LSTM simulations over the historical period with simulations from DHI's GHM, a well-calibrated physics-based model. The objective was to identify when data-driven models outperform their physics-based counterparts and which fine-tuning approach is most effective for this region.

The study utilized ERA5 forcing data and HydroATLAS basin properties formatted to the CAMELS standard, combined with streamflow observations from 60 non-public stations across the Mekong basin. We selected an off-the-shelf LSTM model from the NeuralHydrology package, pre-trained on the global Caravan dataset, and applied three distinct fine-tuning strategies: direct fine-tuning of the Global model on Local data (GL), fine-tuning on Regional data (GR), and a two-step process of fine-tuning first on Regional and then on Local data (GRL). For each model, we performed a hyperparameter sweep to maximize the Kling-Gupta Efficiency (KGE). The dataset was divided into 15 years for training, followed by 5 years for validation and 5 years for testing. Performance was benchmarked against the DHI-GHM using KGE and Nash-Sutcliffe Efficiency (NSE) metrics.
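As an illustration, a minimal sketch of the two-step GRL strategy using NeuralHydrology's fine-tuning entry point is given below. The file names, station identifier, and config contents are hypothetical; only the pattern of pointing each fine-tuning config at a previous run via its base_run_dir key follows the package's documented fine-tuning workflow.

    # Hypothetical sketch of the two-step GRL strategy with NeuralHydrology.
    # Paths and config files are illustrative, not the authors' setup.
    from pathlib import Path

    from neuralhydrology.nh_run import finetune

    # Step 1 (G -> GR): fine-tune the Caravan-pretrained global model on all
    # 60 Mekong stations. The YAML references the pre-trained run through its
    # base_run_dir key and lists the regional basins in its basin files.
    finetune(config_file=Path("configs/finetune_regional.yml"))

    # Step 2 (GR -> GRL): fine-tune the regional model on a single target
    # catchment; this config's base_run_dir is the step-1 run directory.
    finetune(config_file=Path("configs/finetune_local_station_042.yml"))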

Analysis indicates that the GL approach yields the highest KGE in nearly half of the basins, while the GRL approach proves superior in the remaining half; notably, the likelihood of GRL being the best-performing approach increases with basin area. Overall, fine-tuning LSTMs on both regional and local streamflow (GRL) improved performance compared to strictly regional (GR) or strictly local (GL) fine-tuning, with the median KGE increasing from 0.65 to 0.72. While this does not fully match the overall accuracy of the DHI-GHM in the test period (median KGE of 0.75), the fine-tuned LSTM outperformed the physics-based model in all catchments with poorly described processes, such as irrigation abstraction and infiltration after overtopping, where the DHI-GHM yielded a KGE below 0.6. In well-calibrated catchments, performance was comparable. Furthermore, the performance gap narrows when expressed in NSE: the LSTM outperformed the DHI-GHM in terms of mean NSE despite a lower median NSE, since NSE is unbounded below and its mean is therefore sensitive to a few poorly simulated catchments.
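For reference, the two benchmark metrics follow their standard definitions (Gupta et al., 2009, for KGE; Nash and Sutcliffe, 1970, for NSE); the NumPy sketch below is illustrative, not the authors' evaluation code.

    # Standard definitions of the two benchmark metrics; illustrative only.
    import numpy as np

    def kge(sim: np.ndarray, obs: np.ndarray) -> float:
        """Kling-Gupta Efficiency: 1 is perfect, lower is worse."""
        r = np.corrcoef(sim, obs)[0, 1]      # linear correlation
        alpha = np.std(sim) / np.std(obs)    # variability ratio
        beta = np.mean(sim) / np.mean(obs)   # bias ratio
        return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

    def nse(sim: np.ndarray, obs: np.ndarray) -> float:
        """Nash-Sutcliffe Efficiency: 1 is perfect, unbounded below,
        which is why a few poor catchments can dominate the mean."""
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - np.mean(obs)) ** 2)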

These findings suggest that while calibrated physics-based models remain robust, neurohydrological approaches offer distinct advantages in representing complex or unmodeled physical processes. The study highlights that the optimal training strategy is scale-dependent, with multi-step fine-tuning providing greater benefits for larger basins. Ultimately, the ability of LSTMs to outperform traditional models in areas with complex anthropogenic or structural challenges suggests they are a vital, complementary tool for enhancing hydrological predictability.

How to cite: Palmitessa, R., Chewning, C., Luchner, J., and Meneses, E. J.: Benchmarking fine-tuning strategies for LSTM rainfall-runoff models in the Mekong basin, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-2512, https://doi.org/10.5194/egusphere-egu26-2512, 2026.