EGU25-7742, updated on 14 Mar 2025
https://doi.org/10.5194/egusphere-egu25-7742
EGU General Assembly 2025
© Author(s) 2025. This work is distributed under
the Creative Commons Attribution 4.0 License.
Oral | Friday, 02 May, 08:30–08:40 (CEST)
 
Room 3.16/17
A reinforcement learning approach for parameter optimization in the SWAT-C model
Byeongwon Lee, Hyemin Jeong, Younghun Lee, and Sangchul Lee
  • Department of Environmental Science & Ecological Engineering, College of Life Sciences & Biotechnology, Korea University, Seoul, Republic of Korea

Parameter calibration of complex environmental models remains a significant challenge in watershed management, particularly when multiple biogeochemical processes are integrated. Reinforcement learning (RL) has emerged as a promising approach to complex optimization problems because of its ability to learn optimal strategies through continuous interaction and feedback. This study presents SWAT-C-RL, a novel approach that combines the Soil and Water Assessment Tool-Carbon (SWAT-C) with RL for efficient multi-objective parameter calibration. We implement a multi-agent degenerate proximal policy optimization framework that addresses the structural characteristics of SWAT-C by optimizing hydrological and carbon-cycle parameters simultaneously. Each agent specializes in a distinct parameter set while coordinating through a shared reward mechanism, enabling comprehensive model calibration with reduced computational demands. The methodology is validated across two geographically and environmentally distinct watersheds: the Tuckahoe Creek Watershed (TCW, 220.7 km²) in the United States and the Miho River Watershed (MRW, 1,855 km²) in South Korea. With their contrasting sizes, climate patterns, topography, and land-use distributions, the two watersheds provide a test of the model's adaptability in simulating both water and carbon dynamics. Model performance will be evaluated using the Nash-Sutcliffe Efficiency (NSE) and percent bias (PBIAS) metrics, and SWAT-C-RL will be compared against traditional calibration with Sequential Uncertainty Fitting version 2 (SUFI-2). The findings are expected to demonstrate the potential of integrated reinforcement learning approaches in environmental modeling, particularly for complex multi-objective calibration problems.
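For readers unfamiliar with the evaluation metrics named above, the short Python sketch below illustrates how NSE and PBIAS can be computed from observed and simulated series and folded into a single shared reward of the kind the multi-agent framework could use. This is a minimal illustration, not the authors' implementation: the metric weighting, the penalty on |PBIAS|, and the function and variable names are assumptions made for clarity.

import numpy as np

def nse(obs, sim):
    # Nash-Sutcliffe Efficiency: 1 is a perfect fit; values below 0 mean the
    # simulation performs worse than the mean of the observations.
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    # Percent bias: 0 is ideal; positive values indicate underestimation
    # (SWAT convention).
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

def shared_reward(obs_flow, sim_flow, obs_carbon, sim_carbon,
                  w_flow=0.5, w_carbon=0.5):
    # One scalar reward shared by the hydrology and carbon agents: reward
    # high NSE and penalize large |PBIAS| for both streamflow and carbon
    # fluxes. The 0.01 penalty factor and the 50/50 weights are illustrative
    # assumptions, not values reported in the abstract.
    def score(obs, sim):
        return nse(obs, sim) - 0.01 * abs(pbias(obs, sim))
    return (w_flow * score(obs_flow, sim_flow)
            + w_carbon * score(obs_carbon, sim_carbon))

In an RL calibration loop of this kind, each episode would run SWAT-C with the agents' proposed parameter values, compute the shared reward from the resulting streamflow and carbon-flux series, and use that reward to update both agents' policies.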

How to cite: Lee, B., Jeong, H., Lee, Y., and Lee, S.: A reinforcement learning approach for parameter optimization in the SWAT-C model, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-7742, https://doi.org/10.5194/egusphere-egu25-7742, 2025.