- 1National Taiwan University, Civil Engineering, Computer-Aided Engineering Division, Taiwan (r13521608@ntu.edu.tw)
- 2National Taiwan University, Civil Engineering, Computer-Aided Engineering Division, Taiwan (r13521608@ntu.edu.tw)
Rainfall field reconstruction from sparse gauge observations has long been a challenge in hydrometeorology. Traditional geostatistical approaches, such as Ordinary Kriging (OK) and related geostatistics-based data merging methods, have been widely used (Matheron, 1963; Oliver and Webster, 1990; Sideris et al., 2014). Although generally promising, these models often struggle to maintain spatial-temporal consistency while preserving fine-scale features, owing to the limitations of their underlying statistical assumptions.
Recent studies have sought to address this limitation with machine learning (Appleby et al., 2020; Harris et al., 2022; Price and Rasp, 2022; Nag et al., 2023; Hsu et al., 2024; Chen et al., 2024). For example, Hsu et al. (2024) introduced a deep-learning approach for downscaling precipitation data, thereby enhancing the representation of fine-scale structures. Meanwhile, Chen et al. (2024) combined spatial basis function modeling with neural network-driven feature learning to achieve both high accuracy and interpretability in geospatial interpolation.
However, to our knowledge, existing methods have not fully addressed temporal coherence in precipitation field reconstruction, particularly the preservation of spatial-temporal patterns across consecutive frames. Moreover, many of these methods assume a simple averaging relationship between point measurements and areal precipitation, overlooking the complex scale discrepancy between rain gauge observations and their representative areal means. These deficiencies tend to produce overly smooth fields or low correlation between consecutive frames.
To overcome these limitations, we present a reconstruction method that integrates Convolutional Neural Networks (CNNs) with Generative Adversarial Networks (GANs). Specifically, it incorporates three key innovations: (1) a multi-scale convolution kernel for capturing diverse spatial dependencies, (2) a Fast Fourier Convolution implementation for high-frequency signal preservation, and (3) an adaptive noise injection mechanism that enriches textural details based on local complexity measures.
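A minimal PyTorch sketch of these three components is shown below. The module names, channel counts, kernel sizes, and the gradient-based local-complexity proxy are illustrative assumptions, not the exact architecture used in this study.

import torch
import torch.nn as nn

class MultiScaleConv(nn.Module):
    """(1) Parallel convolutions with different kernel sizes, fused by a 1x1 convolution."""
    def __init__(self, in_ch, out_ch, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in kernel_sizes
        )
        self.fuse = nn.Conv2d(out_ch * len(kernel_sizes), out_ch, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

class FourierConv(nn.Module):
    """(2) Spectral branch: a point-wise convolution applied in the Fourier domain,
    in the spirit of Fast Fourier Convolution, to help preserve high-frequency content."""
    def __init__(self, channels):
        super().__init__()
        self.spectral = nn.Conv2d(channels * 2, channels * 2, 1)  # real/imag parts stacked on channels

    def forward(self, x):
        b, c, h, w = x.shape
        freq = torch.fft.rfft2(x, norm="ortho")
        z = self.spectral(torch.cat([freq.real, freq.imag], dim=1))
        real, imag = z.chunk(2, dim=1)
        return torch.fft.irfft2(torch.complex(real, imag), s=(h, w), norm="ortho")

class AdaptiveNoise(nn.Module):
    """(3) Noise injection scaled by a local-complexity map (here: mean absolute spatial gradient)."""
    def __init__(self, channels):
        super().__init__()
        self.gain = nn.Conv2d(1, channels, 1)  # learnable per-channel gain on the injected noise

    def forward(self, x):
        gx = nn.functional.pad((x[..., 1:] - x[..., :-1]).abs(), (0, 1))
        gy = nn.functional.pad((x[:, :, 1:, :] - x[:, :, :-1, :]).abs(), (0, 0, 0, 1))
        complexity = (gx + gy).mean(dim=1, keepdim=True)
        return x + self.gain(complexity) * torch.randn_like(x)

# Example: a 16-channel feature map on a 64 x 64 grid keeps its shape through the three blocks.
x = torch.randn(2, 16, 64, 64)
block = nn.Sequential(MultiScaleConv(16, 16), FourierConv(16), AdaptiveNoise(16))
print(block(x).shape)  # torch.Size([2, 16, 64, 64])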
To evaluate the proposed method, an experiment using high-resolution radar images over a 64 × 64 gridded domain is designed. Within each 10 × 10 sub-domain, a known point is arbitrarily chosen, and the values at these point locations remain known throughout the entire event. Our task is to use these known points to predict (i.e., to interpolate) the rest of the image at each time step. The training dataset comprises 1-km Nimrod precipitation fields at 5-min intervals, covering a 64 × 64 km² domain centered on Birmingham, UK, and spanning 2016 to 2020. The validation dataset consists of 20 selected storm events between 2021 and 2022. The Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) are used to assess the predictions. In addition, the Radially Averaged Power Spectral Density (RAPSD) is employed to compare power spectral density across frequency ranges, allowing us to assess how well the reconstruction captures both fine-scale details and overall coarse-scale features.
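A compact NumPy sketch of the RAPSD metric is given below; the radial binning scheme is an assumption (ready-made implementations exist, e.g. in the pysteps library), but it conveys how the 2-D power spectrum is averaged over wavenumber rings.

import numpy as np

def rapsd(field):
    """Radially averaged power spectral density of a square 2-D precipitation field."""
    n = field.shape[0]
    power = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2   # centred 2-D power spectrum
    freqs = np.fft.fftshift(np.fft.fftfreq(n))
    kx, ky = np.meshgrid(freqs, freqs)
    radius = np.sqrt(kx ** 2 + ky ** 2)                        # radial frequency of each cell
    bins = (radius * n).astype(int)                            # integer wavenumber rings
    counts = np.maximum(np.bincount(bins.ravel()), 1)          # avoid division by zero in empty rings
    psd = np.bincount(bins.ravel(), weights=power.ravel()) / counts
    return psd[: n // 2]                                       # keep wavenumbers up to the Nyquist limit

# Example: a smoothed 64 x 64 frame loses power at high wavenumbers (the tail of the curve).
rng = np.random.default_rng(0)
reference = rng.random((64, 64))
reconstruction = 0.5 * (reference + np.roll(reference, 1, axis=0))  # crude smoothing
print(rapsd(reference)[-5:])
print(rapsd(reconstruction)[-5:])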
Preliminary results indicate substantial improvements over traditional Ordinary Kriging in both accuracy and computational efficiency: on average, the MAE decreased by 37% and the RMSE by 22%. In addition, the RAPSD results demonstrate an improvement in capturing spatial details. These findings underscore the considerable potential of deep learning techniques for enhancing the spatial-temporal reconstruction of precipitation fields.
How to cite: Wang, B.-Z. and Wang, L.-P.: From points to images: A deep-learning enhanced spatial-temporal reconstruction of precipitation data, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-4537, https://doi.org/10.5194/egusphere-egu25-4537, 2025.