EGU22-2799, updated on 27 Mar 2022
EGU General Assembly 2022
© Author(s) 2022. This work is distributed under
the Creative Commons Attribution 4.0 License.

Surveillance audio-based rainfall observation: a crowdsourcing approach

Xing Wang1,2,4, Xuejun Liu1,2,3, Thomas Glade4, and Meizhen Wang1,2,3
  • 1Key Laboratory of Virtual Geographic Environment (Nanjing Normal University), Ministry of Education, Nanjing, 210023, China
  • 2State Key Laboratory Cultivation Base of Geographical Environment Evolution (Jiangsu Province), Nanjing, 210023, China
  • 3Jiangsu Center for Collaborative Innovation in Geographical Information Resource Development and Application, Nanjing, 210023, China
  • 4University of Vienna, Department of Geography and Regional Research, Vienna, Austria

Rainfall data with high spatiotemporal resolution are of great value in many research fields, such as meteorology, hydrology, global warming, and urban disaster monitoring. Current rainfall observation systems include ground-based rain gauges and remote sensing by radars and satellites. However, there is an increasing demand for rainfall data with higher spatiotemporal resolution. Thanks to advocacy from many research institutions and international organizations, several innovative crowdsourcing ideas, including opportunistic sensing and citizen science initiatives, have been pursued in recent years. Commercial cellular communication networks, windshield wipers or optical sensors in moving vehicles, smartphones, social media, and surveillance cameras/videos have been identified as alternative rain gauges. In particular, environmental audio recordings are a rich and underexploited source for identifying and even characterizing rainfall events.
Widespread surveillance cameras continuously record rainfall information, which provides a basis for rainfall monitoring. Compared with the aforementioned methods, surveillance audio-based rainfall estimation, as discussed in existing studies, offers the advantages of high spatiotemporal resolution, low cost, and all-weather operation. Therefore, this study focuses on mining rainfall information from urban surveillance audio for quantitative precipitation inversion. Rain sound is generated by the collision of falling raindrops with underlying objects. In real applications, the complex structure of underlying surfaces and random background noise from human activity in urban areas make rainfall sound in surveillance audio vulnerable to contamination, and surveillance audio-based rainfall estimation correspondingly more challenging. In our study, rainfall acoustic indicators were selected to represent rainfall sound, and deep learning-based rainfall observation systems were built on urban surveillance audio data. Experimental results demonstrate the effectiveness of our system for rainfall estimation. Our research is a new attempt at developing crowdsourcing-based rainfall observation, which can also provide a beneficial supplement to current rainfall observation networks.
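The abstract does not specify which acoustic indicators were selected, so the following is only an illustrative sketch (all function and parameter names are hypothetical, not the authors' implementation). It computes three frame-level indicators commonly used for environmental sound, which a deep regression model could then map to rainfall intensity:

```python
import numpy as np

def acoustic_indicators(signal, sr=16000, frame_len=1024, hop=512):
    """Frame-level indicators often used to characterise rain sound:
    RMS energy, zero-crossing rate, and spectral centroid (Hz)."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)
    feats = np.empty((n_frames, 3))
    for i in range(n_frames):
        frame = signal[i * hop : i * hop + frame_len]
        rms = np.sqrt(np.mean(frame ** 2))                      # loudness proxy
        zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0    # noisiness proxy
        mag = np.abs(np.fft.rfft(frame))
        centroid = np.sum(freqs * mag) / (np.sum(mag) + 1e-12)  # spectral "brightness"
        feats[i] = (rms, zcr, centroid)
    return feats

# Illustration on one second of synthetic broadband noise standing in for rain sound.
rng = np.random.default_rng(0)
audio = rng.standard_normal(16000)
indicators = acoustic_indicators(audio)
print(indicators.shape)  # one 3-element feature vector per frame
```

In a pipeline like the one described, such per-frame feature vectors (or spectrogram-style representations) would form the input sequence to the deep learning model, whose specific architecture is not given in the abstract.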

How to cite: Wang, X., Liu, X., Glade, T., and Wang, M.: Surveillance audio-based rainfall observation: a crowdsourcing approach, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2799, 2022.

Corresponding presentation materials formerly uploaded have been withdrawn.