EGU24-19085, updated on 08 Apr 2024
https://doi.org/10.5194/egusphere-egu24-19085
EGU General Assembly 2024
© Author(s) 2024. This work is distributed under
the Creative Commons Attribution 4.0 License.

Rainfall Intensity Estimation Based on Raindrops Sound: Leveraging the Convolutional Neural Network for Analyzing Spectrogram

Seunghyun Hwang1, Jinwook Lee2, Jongyun Byun3, Kihong Park4, and Changhyun Jun5
  • 1Chung-Ang University, Department of Civil Engineering, Seoul, Korea, Republic of (hwanghnj@cau.ac.kr)
  • 2University of Hawaii at Manoa, Department of Civil and Environmental Engineering, Hawaii, USA (jinwookl@hawaii.edu)
  • 3Chung-Ang University, Department of Civil Engineering, Seoul, Korea, Republic of (whddbs0932@cau.ac.kr)
  • 4Chung-Ang University, Department of Civil Engineering, Seoul, Korea, Republic of (alfhfhrl@cau.ac.kr)
  • 5Chung-Ang University, Department of Civil and Environmental Engineering, Seoul, Korea, Republic of (cjun@cau.ac.kr)

In this study, we propose a novel approach to precipitation measurement based on rainfall acoustics, using an effective acoustic collection device built from low-cost IoT sensors housed in a waterproof enclosure. Here, rainfall acoustics refer to the sound generated when falling raindrops strike surfaces such as the ground or a canopy. Even at the same rainfall intensity, the frequency characteristics of the sound can differ depending on the medium the raindrops strike. In this research, an acoustic collection device combining a Raspberry Pi and a condenser microphone was placed inside a waterproof enclosure and deployed in a rainfall environment to record rainfall acoustics. This approach not only controls the medium on which the raindrops impact but also effectively blocks ambient noise and water, ensuring consistent acoustic characteristics regardless of the installation environment. The collected recordings were segmented into 10-second intervals, and a spectrogram in the frequency domain was extracted from each segment by applying the Short-Time Fourier Transform. Finally, using the extracted spectrograms as input data, a rainfall intensity estimation model based on a convolutional neural network was developed, and precision rainfall observation instruments (e.g., PARSIVEL, Pluvio²) were used collectively to validate the model. Acoustic-based rainfall observation enables a dense observation network built from low-cost devices, and the high temporal resolution of acoustic data allows extremely short rainfall observation intervals. This methodology offers an opportunity for cost-effective, high-spatiotemporal-resolution rainfall observation, overcoming the limitations of traditional methods.
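The preprocessing pipeline described above (10-second segmentation followed by STFT spectrogram extraction) can be sketched as follows. This is a minimal illustration under assumed parameters, not the authors' implementation: the 16 kHz sampling rate, 1024-sample window, and 512-sample hop are assumptions, and the synthetic noise stands in for real recordings.

```python
import numpy as np

def segment_audio(audio: np.ndarray, sr: int, seg_seconds: int = 10) -> np.ndarray:
    """Split a 1-D audio signal into non-overlapping fixed-length segments."""
    seg_len = sr * seg_seconds
    n_segs = len(audio) // seg_len
    return audio[: n_segs * seg_len].reshape(n_segs, seg_len)

def log_spectrogram(segment: np.ndarray, nperseg: int = 1024, hop: int = 512) -> np.ndarray:
    """Short-Time Fourier Transform magnitude spectrogram, log-scaled.

    Returns an array of shape (freq_bins, time_frames) suitable as CNN input.
    """
    window = np.hanning(nperseg)
    n_frames = 1 + (len(segment) - nperseg) // hop
    frames = np.stack(
        [segment[i * hop : i * hop + nperseg] * window for i in range(n_frames)]
    )
    # rfft keeps the non-negative frequencies: nperseg // 2 + 1 bins
    return np.log1p(np.abs(np.fft.rfft(frames, axis=1))).T

# Example: 30 s of synthetic noise at an assumed 16 kHz sampling rate
sr = 16_000
rng = np.random.default_rng(0)
audio = rng.normal(size=30 * sr).astype(np.float32)

segments = segment_audio(audio, sr)                       # 3 segments of 10 s each
spectrograms = np.stack([log_spectrogram(s) for s in segments])
print(spectrograms.shape)                                 # (3, 513, 311)
```

Each resulting (freq_bins, time_frames) array would then be fed to the CNN as a single-channel image.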

Keywords: Acoustic Sensing, Rainfall Acoustics, Precipitation, Convolutional Neural Network

Acknowledgement

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. RS-2023-00250239) and in part by the Korea Meteorological Administration Research and Development Program under Grant RS-2023-00243008.

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. NRF-2022R1A4A3032838).

How to cite: Hwang, S., Lee, J., Byun, J., Park, K., and Jun, C.: Rainfall Intensity Estimation Based on Raindrops Sound: Leveraging the Convolutional Neural Network for Analyzing Spectrogram, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-19085, https://doi.org/10.5194/egusphere-egu24-19085, 2024.