EGU23-3928
https://doi.org/10.5194/egusphere-egu23-3928
EGU General Assembly 2023
© Author(s) 2023. This work is distributed under
the Creative Commons Attribution 4.0 License.

Comparison of deep learning approaches to monitor trash screen blockage from CCTV cameras

Remy Vandaele1, Sarah L Dance1,2,3, and Varun Ojha4
  • 1Meteorology Department, University of Reading, Reading, UK (r.a.vandaele@reading.ac.uk, s.l.dance@reading.ac.uk)
  • 2Department of Mathematics and Statistics, University of Reading, Reading, UK (s.l.dance@reading.ac.uk)
  • 3National Centre for Earth Observation, Reading, UK (s.l.dance@reading.ac.uk)
  • 4Department of Computer Science, University of Reading, Reading, UK (v.k.ojha@reading.ac.uk)

We investigate the use of CCTV cameras and deep learning to automatically monitor trash screen blockage. 

Trash screens are installed to prevent debris from entering critical parts of river networks (pipes, tunnels, locks, etc.). When debris piles up at a trash screen, it may block the waterway and cause flooding. It is thus crucial to clean blocked trash screens to avoid flooding and consequent damage. Currently, maintenance crews must manually check a camera or river level data, or go on site to inspect the screen, to know whether it needs cleaning. This wastes valuable time in emergency situations where blocked screens must be cleaned urgently (e.g., when heavy rainfall is forecast). Some initial attempts at predicting trash screen blockage exist, but these have not been widely adopted in practice. CCTV cameras can be installed easily at almost any location and can thus be used to monitor the state of trash screens, but the images need to be processed by an automated algorithm to determine whether the screen is blocked.

With the help of UK-based practitioners (the Environment Agency and local councils), we have created a dataset of 40,000 CCTV trash screen images coming from 36 cameras, each labelled with blockage information. Using this dataset, we have compared three deep learning approaches to automating the detection of trash screen blockage:

  • A binary image classifier, which takes a single image as input and outputs a binary label estimating whether the trash screen is blocked.
  • An anomaly detection approach, which tries to reconstruct the input image with an auto-encoder trained on clean trash screen images. Consequently, blocked trash screens are detected as anomalies by the auto-encoder.
  • An image similarity estimation approach based on a Siamese network, which takes two images as input and outputs a similarity index related, in our case, to whether both images contain trash.
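The decision rules behind these three approaches can be sketched as follows. This is a minimal illustration with NumPy stand-ins for the network outputs; the function names, thresholds, and distance metric are hypothetical and not taken from the study.

```python
import numpy as np


def classifier_blocked(logit: float, threshold: float = 0.5) -> bool:
    """Binary classifier: the network outputs a logit for a single image;
    a sigmoid turns it into a blockage probability compared to a threshold."""
    prob = 1.0 / (1.0 + np.exp(-logit))
    return bool(prob > threshold)


def autoencoder_blocked(image: np.ndarray, reconstruction: np.ndarray,
                        error_threshold: float = 0.05) -> bool:
    """Anomaly detection: an auto-encoder trained only on clean screens
    reconstructs clean images well, so a high reconstruction error flags
    the input as anomalous, i.e. likely blocked."""
    mse = float(np.mean((image - reconstruction) ** 2))
    return bool(mse > error_threshold)


def siamese_blocked(clean_embedding: np.ndarray, current_embedding: np.ndarray,
                    distance_threshold: float = 1.0) -> bool:
    """Similarity estimation: a Siamese network embeds a known-clean
    reference image and the current image; a large embedding distance
    indicates the current screen no longer looks like the clean one."""
    dist = float(np.linalg.norm(clean_embedding - current_embedding))
    return bool(dist > distance_threshold)
```

In all three cases the neural network reduces each camera frame to a scalar (a probability, a reconstruction error, or an embedding distance), and a single threshold turns that scalar into a blocked/clear alert.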

Using performance criteria chosen in discussion with practitioners (overall accuracy, false alarm rate, resilience to changes in luminosity and to moving fields of view, computing requirements), we show that deep learning can be used in practice to automate the identification of blocked trash screens. We also analyse the strengths and weaknesses of each approach and provide guidelines for their application.

How to cite: Vandaele, R., Dance, S. L., and Ojha, V.: Comparison of deep learning approaches to monitor trash screen blockage from CCTV cameras, EGU General Assembly 2023, Vienna, Austria, 24–28 Apr 2023, EGU23-3928, https://doi.org/10.5194/egusphere-egu23-3928, 2023.

Supplementary materials
