EGU24-1868, updated on 08 Mar 2024
https://doi.org/10.5194/egusphere-egu24-1868
EGU General Assembly 2024
© Author(s) 2024. This work is distributed under
the Creative Commons Attribution 4.0 License.

Utilizing convolutional neural networks for ground-based cloud observations

Markus Rosenberger, Manfred Dorninger, and Martin Weissmann
  • University of Vienna, Department of Meteorology and Geophysics, Meteorology, Austria

Clouds of any kind play a substantial role in a wide variety of atmospheric processes, e.g. radiation and moisture transport. Moreover, knowledge of the currently occurring cloud types allows an observer to draw conclusions about the short-term evolution of the state of the atmosphere, and hence of the weather. However, the number of operational cloud observations is decreasing rather than increasing due to high monetary and personnel expenses.

To counteract this trend, we train a multi-input residual neural network architecture from scratch on ground-based RGB images, where each instance consists of 4 images. Human cloud observations from the nearby SYNOP station Vienna Hohe Warte are used as ground truth. To the best of our knowledge, we are the first to classify clouds with this methodology into 30 different classes that are consistent with the state-of-the-art scheme for operational cloud observations. Of these 30 classes, up to three can be observed simultaneously in the same instance, making this a multi-input, multi-label classification problem. Additional difficulty stems from the highly imbalanced ground-truth class distribution: the most abundant cloud class is observed several hundred times more frequently than the least abundant one, leading to strong biases in the model output. To reduce these biases, class-specific resampling methods are used, which increase the total number of available instances from fewer than 12,000 to more than 20,000. This resampling is essential for obtaining satisfactory results.
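The multi-label setup and class-specific resampling described above can be sketched as follows. This is an illustrative assumption of how such a scheme might look (multi-hot encoding of up to three of 30 classes, capped inverse-frequency repetition factors), not the authors' exact implementation:

```python
from collections import Counter

NUM_CLASSES = 30  # number of cloud classes in the observation scheme


def multi_hot(labels, num_classes=NUM_CLASSES):
    """Encode the (up to three) simultaneously observed cloud classes
    of one instance as a multi-hot target vector."""
    vec = [0] * num_classes
    for c in labels:
        vec[c] = 1
    return vec


def oversample_factors(instances, cap=8):
    """Class-specific resampling factors: rarer classes are repeated
    more often. The factor for a class is the ratio of the most
    frequent class's count to that class's count, capped to avoid
    extreme duplication. Hypothetical scheme for illustration only."""
    counts = Counter(c for labels in instances for c in labels)
    max_count = max(counts.values())
    return {c: min(cap, max_count // counts[c]) for c in counts}
```

An instance whose rarest label has a factor of, say, 4 would then be duplicated four times in the training set, which is one simple way the total number of instances can grow from under 12,000 to over 20,000.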

We conducted a large number of experiments covering a variety of model architectures as well as different loss and class weighting functions. Preliminary results will be shown, which indicate very high precision and sufficiently high recall for most classes of the validation data, especially those for which aggressive resampling strategies were used. The performance improves further when classes with visual similarities are combined during validation; thus, a substantial portion of false predictions can be explained by the model confusing similar-looking classes. Results also show that biases due to class imbalance are heavily reduced but still present. Overall, our classifier also shows exceptionally good reliability.
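A common choice of class weighting function in a multi-label setting is a class-weighted binary cross-entropy, in which each class is treated as an independent binary task and rare classes receive larger positive weights. The following sketch is a generic illustration of that idea, not the specific loss used in this study:

```python
import math


def weighted_bce(y_true, y_prob, class_weights, eps=1e-7):
    """Class-weighted binary cross-entropy over a multi-hot target.

    y_true        multi-hot ground-truth vector (0/1 per class)
    y_prob        predicted per-class probabilities
    class_weights per-class weight on the positive term, so that
                  missing a rare class is penalized more heavily
    """
    total = 0.0
    for t, p, w in zip(y_true, y_prob, class_weights):
        p = min(max(p, eps), 1.0 - eps)  # clamp for numerical stability
        total += -(w * t * math.log(p) + (1 - t) * math.log(1.0 - p))
    return total / len(y_true)
```

With all weights equal to 1 this reduces to the ordinary per-class binary cross-entropy; raising the weight of a rare class increases the loss whenever that class is present but predicted with low probability.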

With such a machine learning method and a common camera system, clouds can be observed independently and operationally where no human observations are available. This also allows permanent monitoring of the current state of the weather as well as its short-term evolution. Further applications of such an automated cloud classifier include model verification and cloud monitoring at high temporal resolution in the proximity of solar power plants, where approaching clouds can substantially change the achievable energy output and thus make precautions necessary.

How to cite: Rosenberger, M., Dorninger, M., and Weissmann, M.: Utilizing convolutional neural networks for ground-based cloud observations, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-1868, https://doi.org/10.5194/egusphere-egu24-1868, 2024.

Supplementary materials

Supplementary material file

Supplementary materials version 1 – uploaded on 19 Apr 2024, no comments