EMS Annual Meeting Abstracts
Vol. 20, EMS2023-9, 2023, updated on 06 Jul 2023
EMS Annual Meeting 2023
© Author(s) 2023. This work is distributed under
the Creative Commons Attribution 4.0 License.

WTF: Where There's Fog, Detecting Low Visibility Conditions on Highways Using Multimodal Computer Vision

Pedro Jeuris1,2, Giuliano Andrea Pagani2, and Mirela Popa1
  • 1Department of Advanced Computing Sciences, Maastricht University, Maastricht, The Netherlands (p.jeuris@student.maastrichtuniversity.nl)
  • 2Royal Netherlands Meteorological Institute (KNMI), De Bilt, The Netherlands (andrea.pagani@knmi.nl)

Fog poses a considerable risk to transportation by land, air, and sea. On land, the resulting low-visibility conditions can cause road accidents, which in turn can lead to economic losses, material damage, and, in the worst cases, human casualties. This raises the need for a widespread fog monitoring and warning system to prevent such events. Specialized equipment for detecting fog exists, but it is expensive, making large-scale deployment infeasible. Instead, we propose using infrastructure that is already available, namely traffic monitoring cameras, in combination with deep learning computer vision techniques. This approach has several advantages: 1) no specialized or additional equipment is necessary; 2) predictions can be visually verified by an operator before a decision is taken to raise a fog-related warning; and 3) it scales cheaply, as cameras can be added where and when needed. In this work, two well-established computer vision models (ResNet and ViT) are tested on this task, and explainable AI techniques are applied to better understand which parts of the input are important for the classification. Furthermore, we extend these computer vision models to a multimodal setting by adding meteorological variables to increase the classification performance. To our knowledge, this is the first work combining visual inputs and meteorological data for the task of fog detection. Additionally, given the limited amount of annotated images and the high effort required to label them, we also explored self-supervised methods to improve performance. These yielded promising results, although they did not surpass the accuracy of the supervised approach. The best training setting achieves an accuracy of 91% and an F1 score of 85% on the test set.
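The abstract does not specify how the visual and meteorological modalities are combined. As a purely illustrative aid, the sketch below assumes a simple late-fusion scheme: a feature vector from a (here mocked) image backbone is concatenated with a small vector of meteorological readings and passed to a linear fog/no-fog head. All names, dimensions, and the choice of variables (humidity, temperature, dew point) are assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_image_features(image: np.ndarray) -> np.ndarray:
    """Stand-in for a frozen ResNet/ViT backbone.

    A real system would run the camera frame through the pretrained model;
    here we just average over spatial dimensions to get a per-channel vector.
    """
    return image.mean(axis=(0, 1))  # shape (C,)

def fuse(image_feat: np.ndarray, met_vars: np.ndarray) -> np.ndarray:
    """Late fusion by concatenation of visual and meteorological features."""
    return np.concatenate([image_feat, met_vars])

# Fake inputs: one 64x64 RGB traffic-camera frame and three meteorological
# readings (illustrative: relative humidity, air temperature, dew point).
frame = rng.random((64, 64, 3))
met = np.array([0.93, 4.2, 3.8])

features = fuse(extract_image_features(frame), met)

# Linear "fog / no fog" head on the fused vector (randomly initialized here;
# in practice its weights would be learned jointly with or on top of the backbone).
weights = rng.normal(size=features.shape[0])
logit = features @ weights
prediction = "fog" if logit > 0 else "no fog"
print(features.shape, prediction)
```

The appeal of late fusion in this setting is that the camera backbone and the meteorological inputs stay decoupled, so meteorological variables can be added or removed without retraining the visual feature extractor.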

How to cite: Jeuris, P., Pagani, G. A., and Popa, M.: WTF: Where There's Fog, Detecting Low Visibility Conditions on Highways Using Multimodal Computer Vision, EMS Annual Meeting 2023, Bratislava, Slovakia, 4–8 Sep 2023, EMS2023-9, https://doi.org/10.5194/ems2023-9, 2023.