EGU26-16074, updated on 14 Mar 2026
https://doi.org/10.5194/egusphere-egu26-16074
EGU General Assembly 2026
© Author(s) 2026. This work is distributed under
the Creative Commons Attribution 4.0 License.
Oral | Wednesday, 06 May, 14:00–14:10 (CEST), Room 2.44
Development of a Deep Learning–Based Satellite Precipitation Product for Hydrological Applications and Evaluation of Its Reproducibility for Extreme Rainfall
Kansei Fujimoto1 and Taichi Tebakari2
  • 1Graduate School of Science and Engineering, Civil, Human and Environmental Science and Engineering Course, Chuo University, Tokyo, Japan (a18.7asf@g.chuo-u.ac.jp)
  • 2Dept. of Civil and Environmental Engineering, Chuo University, Tokyo, Japan (ttebakari896@g.chuo-u.ac.jp)

In many countries and regions of Southeast Asia, meteorological observation networks remain sparse, and high-accuracy, low-latency precipitation data, which are crucial for disaster mitigation, are still not operationally available. Existing satellite precipitation products include those from the Global Precipitation Measurement (GPM) mission, such as NASA IMERG, and the JAXA GSMaP products. However, the near-real-time versions IMERG Early Run, GSMaP NOW, and GSMaP NRT exhibit data latencies of approximately 4 h, 1 h, and 4 h, respectively. Moreover, their estimation accuracies differ, and even the most rapid product, GSMaP NOW, still shows limitations in the qualitative detection of heavy rainfall.

Therefore, the objective of this study is to develop a near-real-time satellite precipitation product by integrating high-frequency infrared imagery from geostationary meteorological satellites with the GSMaP series. The datasets used include infrared imagery from the geostationary meteorological satellites Himawari‑8/9, microwave-based precipitation estimates from the GSMaP series, and elevation data derived from MERIT DEM. Precipitation estimation is performed using a deep-learning approach, in which infrared imagery, microwave precipitation data, and elevation data are used as input variables, and the output is a rainfall distribution with a spatial resolution of 2 km. The model is trained using meteorological radar data over Japan and subsequently applied to Southeast Asia.
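The abstract does not specify the network architecture, so the input fusion described above can only be illustrated schematically. The toy NumPy sketch below stacks the three input fields (infrared brightness temperature, GSMaP microwave precipitation, and DEM elevation) into a multi-channel tensor and passes it through a single "same"-padded convolution with a ReLU, standing in for the actual deep-learning model; all function names, kernel sizes, and the random stand-in data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def stack_inputs(ir_tbb, gsmap_rain, dem, eps=1e-6):
    """Stack and per-channel normalize the three input fields into (C, H, W).

    ir_tbb:     infrared brightness temperature [K]  (H, W)
    gsmap_rain: microwave precipitation rate [mm/h]  (H, W)
    dem:        terrain elevation [m]                (H, W)
    """
    channels = []
    for field in (ir_tbb, gsmap_rain, dem):
        f = np.asarray(field, dtype=np.float64)
        channels.append((f - f.mean()) / (f.std() + eps))
    return np.stack(channels)  # shape (3, H, W)

def conv2d_same(x, kernels, bias):
    """Naive 'same'-padded 2D convolution: (C, H, W) * (C, k, k) -> (H, W)."""
    c, h, w = x.shape
    k = kernels.shape[-1]
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(xp[:, i:i + k, j:j + k] * kernels) + bias
    return out

def estimate_rain(ir_tbb, gsmap_rain, dem, kernels, bias=0.0):
    """One convolutional layer with ReLU, so estimates stay non-negative [mm/h]."""
    x = stack_inputs(ir_tbb, gsmap_rain, dem)
    return np.maximum(conv2d_same(x, kernels, bias), 0.0)

# Toy grid (8 x 8 cells) with random stand-in fields and untrained weights.
rng = np.random.default_rng(0)
H = W = 8
rain_map = estimate_rain(
    ir_tbb=230 + 30 * rng.random((H, W)),      # cloud-top temperatures ~230-260 K
    gsmap_rain=5 * rng.random((H, W)),          # mm/h
    dem=1000 * rng.random((H, W)),              # m
    kernels=rng.normal(scale=0.1, size=(3, 3, 3)),
)
print(rain_map.shape)  # (8, 8): rainfall estimate on the same grid as the inputs
```

In the actual product a deep network trained against Japanese radar data would replace the single random-weight convolution, but the channel-stacking pattern of heterogeneous inputs on a shared grid is the same.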

The estimated precipitation product has a spatial resolution of 2 km, a temporal resolution of 10 min, and a data latency of 1 h. The results demonstrate that the proposed product successfully reproduces the heavy rainfall event that occurred in southern Thailand in late November 2025 and outperforms the existing GSMaP products.

How to cite: Fujimoto, K. and Tebakari, T.: Development of a Deep Learning–Based Satellite Precipitation Product for Hydrological Applications and Evaluation of Its Reproducibility for Extreme Rainfall, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-16074, https://doi.org/10.5194/egusphere-egu26-16074, 2026.