Camera Rain Gauge Based on Artificial Intelligence
- 1University of Basilicata, UNIBAS, School of Engineering, Potenza, Italy (raffaele.albano@unibas.it)
- 2National Research Institute for Earth Science and Disaster Resilience NIED, Tsukuba, Japan
Flood risk monitoring, alerting, and adaptation in urban areas require near-real-time, fine-scale precipitation observations that are challenging to obtain from currently available measurement networks because of their costs and installation difficulties. In this context, newly available data sources and computational techniques offer enormous potential, in particular by exploiting non-dedicated, widespread, and accessible devices.
This study proposes an unprecedented system for rainfall monitoring based on artificial intelligence, using deep learning for computer vision applied to camera images. Unlike existing approaches in the literature, the method is not device-specific and exploits general-purpose, low-cost cameras (e.g., smartphones, surveillance cameras, dashboard cameras) without requiring parameter settings, image sequences, or videos. Rainfall is measured directly from single photographs through deep learning models based on transfer learning with Convolutional Neural Networks. A binary classification algorithm is developed to detect the presence of rain, and a multi-class classification algorithm is used to estimate a quasi-instantaneous rainfall intensity range. Open data, dash-cam footage from Japan coupled with the high-precision multi-parameter XRAIN radar, and experiments in the NIED Large-scale Rainfall Simulator were combined to form heterogeneous and realistic datasets for training, validation, and testing. Finally, a case study over the Matera urban area (Italy) was used to illustrate the potential and limitations of rainfall monitoring using camera-based detectors.
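As a rough illustration of the transfer-learning setup described above, the sketch below builds a frozen pre-trained convolutional backbone with a new classification head, reused for both the binary rain/no-rain detector and the 6-way intensity classifier. The abstract does not state the framework or backbone; Keras/TensorFlow and MobileNetV2 are illustrative assumptions, not the authors' actual configuration.

```python
# Minimal sketch, assuming a Keras/TensorFlow workflow and a MobileNetV2
# backbone (both are assumptions; the abstract does not specify them).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_rain_classifier(num_classes: int = 2, img_size: int = 224) -> tf.keras.Model:
    """num_classes=2 -> rain/no-rain detector; num_classes=6 -> intensity ranges."""
    # Pre-trained convolutional base, frozen so only the new head is trained.
    base = tf.keras.applications.MobileNetV2(
        input_shape=(img_size, img_size, 3), include_top=False, weights="imagenet"
    )
    base.trainable = False

    inputs = layers.Input(shape=(img_size, img_size, 3))
    x = tf.keras.applications.mobilenet_v2.preprocess_input(inputs)
    x = base(x, training=False)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dropout(0.2)(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)

    model = models.Model(inputs, outputs)
    model.compile(
        optimizer=tf.keras.optimizers.Adam(1e-3),
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

# The binary detector and the 6-way intensity classifier share the same recipe.
binary_model = build_rain_classifier(num_classes=2)
intensity_model = build_rain_classifier(num_classes=6)
```

Freezing the backbone and training only the head is what allows single, uncalibrated photographs from heterogeneous devices to be handled with modest data and computational budgets.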
The prototype was deployed in a real-world operational environment using a pre-existing 5G surveillance camera. The binary classifier showed strong robustness and portability: accuracy was 85.28% on the test set and 85.13% in deployment, with F1-scores of 0.86 and 0.85, respectively, whereas literature algorithms suffer drastic accuracy drops when the image source changes (e.g., from 91.92% to 18.82%). The 6-way classifier reached a test average accuracy of 77.71% and a macro-averaged F1 of 0.73, performing best on the no-rain and heavy-rainfall classes, which represent critical conditions for flood risk. Thus, the results of the tests and the use case demonstrate the model's ability to detect a meteorological state significant for early warning systems. The classification can be performed on single pictures taken in disparate lighting conditions by common acquisition devices, i.e., by static or moving cameras without adjusted parameters. The system does not suit scenes that are also misleading for human visual perception. The proposed method features a high readiness level, cost-effectiveness, and limited operational requirements, allowing easy and quick implementation by exploiting pre-existing devices with a parsimonious use of economic and computational resources.
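The scores reported above (overall accuracy, F1 for the binary model, macro-averaged F1 for the 6-way model) can be reproduced on any labelled prediction set with standard library calls; a minimal sketch using scikit-learn is given below. The label arrays are placeholders, not the study's data.

```python
# Hedged sketch of the evaluation metrics named in the abstract, assuming
# scikit-learn; y_true / y_pred are illustrative placeholders only.
from sklearn.metrics import accuracy_score, f1_score

# Placeholder predictions for six intensity classes (0 = no rain, 5 = heaviest).
y_true = [0, 0, 1, 2, 3, 5, 5, 4]
y_pred = [0, 0, 1, 2, 2, 5, 5, 3]

accuracy = accuracy_score(y_true, y_pred)
macro_f1 = f1_score(y_true, y_pred, average="macro")  # unweighted mean of per-class F1
print(f"accuracy = {accuracy:.4f}, macro-F1 = {macro_f1:.4f}")
```

The macro average weights every class equally, so it reflects performance on the rarer heavy-rainfall classes rather than being dominated by the frequent no-rain case.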
Altogether, this study corroborates the potential of non-traditional and opportunistic sensing networks for the development of hydrometeorological monitoring systems in urban areas, where traditional measurement methods encounter limitations, and in data-scarce contexts, e.g. where remote-sensed rainfall information is unavailable or too coarse with respect to the scale of the proposed study. Future research will involve incremental learning algorithms and further data collection via experiments and crowdsourcing, to improve accuracy while promoting public resilience from a smart-city perspective.
How to cite: Albano, R., Notarangelo, N., Hirano, K., and Sole, A.: Camera Rain Gauge Based on Artificial Intelligence, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4266, https://doi.org/10.5194/egusphere-egu22-4266, 2022.