One of the key contributors to flash flooding is the blockage of cross-drainage hydraulic structures, such as culverts, by flood-borne debris.
The accumulation and interaction of debris with culverts often result in reduced hydraulic capacity, diversion of upstream flows, and structural failure. The 2007 Newcastle, Australia floods, the 1998 Wollongong, Australia floods, and the 2021 Pentre, United Kingdom floods are just a few instances where blockage was reported as a primary cause of cross-drainage hydraulic structure failure.
In this post, we describe our technique for building a diverse visual dataset for computer vision model training, including examples of synthetic images. We break down each component of our solution and provide insights on future research directions.
Problem
Non-linear debris accumulation, the unavailability of real-time data, and complex hydrodynamics make a conventional numerical modeling-based approach ill-suited to the problem. Instead, post-flood visual information has been used to develop blockage policies that rely on several assumptions, which many argue are not truly representative of blockage.
This suggests the need for better understanding and exploring the blockage issue from a technology perspective to aid flood management officials and policymakers.
StopBlock: A technology initiative to monitor the visual blockage of culverts
To help address the blockage problem, StopBlock was initiated as a part of SMART Stormwater Management. Overall, this project involved collaboration between city councils in the Illawarra (Wollongong, Shellharbour, and Kiama) and Shoalhaven regions, Lendlease, and the University of Wollongong’s SMART Infrastructure Facility.
StopBlock aims to assess and monitor the visual blockage at culverts in real time using the latest technologies:
- Artificial intelligence
- Computer vision
- Edge computing
- Internet of Things (IoT)
- Intelligent video analytics
We also built and deployed an artificial intelligence of things (AIoT) solution using NVIDIA edge computing, the latest computer vision detection and classification models, a CCTV camera, and a 4G module. The solution detects the visual blockage status (blocked, partially blocked, or clear) at three culvert sites within the Illawarra region.
Building visual datasets for computer vision model training
Training convolutional neural network (CNN) models for computer vision requires numerous images related to the intended task. Culvert blockage detection has not been addressed from this perspective before, and no image dataset existed for the purpose.
We developed a new training database consisting of diverse image data related to culvert blockage. These images showed varying culvert types, debris types, camera angles, scaling, and lighting conditions.
Only limited real culvert blockage data was available through city council records, so we adopted a combination of real, lab-simulated, and synthetic visual data.
Images of culvert openings and blockage
We collected real images of culverts (blocked and clear) from multiple sources:
- City council historical records
- Online repositories
- Local culvert sites
The collected images represent great diversity in terms of culvert types, debris types, illumination conditions, camera viewpoints, scale, resolution, and even backgrounds. The images of culvert openings and blockages (ICOB) dataset consisted of 929 images in total.
Visual hydraulics-lab blockage dataset
Because not enough real images were available, we collected simulated images from scaled laboratory experiments to supplement the visual dataset.
A thorough hydraulics laboratory investigation was performed where a series of experiments used scaled physical models of culverts. Blockage scenarios used scaled debris (urban and vegetative) under various flooding conditions.
The images represented diversity in terms of culvert types (single circular, double circular, single box, or double box), blockage types (urban, vegetative, or mixed), simulated lighting conditions, camera viewpoints (two cameras), and flooding conditions (inlet discharge levels). However, the dataset was limited by water reflections, unrealistically clear water, an identical background, and identical scaling.
In total, we collected 1,630 images from these experiments to establish the visual hydraulics-lab blockage dataset (VHD).
Synthetic images of culverts
We generated synthetic images of culverts (SIC) using a three-dimensional computer application based on the Unity gaming engine with the goal of enhancing the datasets for training.
The application is specifically designed to simulate culvert blockage scenarios and can generate virtually countless instances of blocked culverts covering any conceivable blockage situation. You can also alter culvert types, water levels, debris types, camera viewpoints, time of day, and scaling.
The app design enables you to select scene features from dropdown menus and drag debris objects from a library, placing them anywhere in the scene with any orientation. You can also script scene parameters to recreate multiple scenarios and batch-capture the images with corresponding labels to aid the training process.
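As a rough illustration of that scripted workflow, the following Python sketch enumerates hypothetical scene parameters and writes a scenario manifest that a batch-capture tool could consume. The parameter names, values, labeling thresholds, and file format are all assumptions, not the app's actual interface.

```python
# Hypothetical sketch: enumerate synthetic-scene parameters and write a
# scenario manifest for batch capture. Names, values, and thresholds are
# illustrative only.
import itertools
import json

CULVERT_TYPES = ["single_circular", "double_circular", "single_box", "double_box"]
DEBRIS_TYPES = ["urban", "vegetative", "mixed"]
BLOCKAGE_FRACTIONS = [0.0, 0.3, 0.6, 0.9]     # fraction of the opening occluded
CAMERA_VIEWS = ["front", "angled"]
TIMES_OF_DAY = ["morning", "noon", "dusk"]

def label_for(fraction):
    # Illustrative thresholds for the three blockage classes.
    if fraction == 0.0:
        return "clear"
    return "partially_blocked" if fraction < 0.9 else "blocked"

scenarios = [
    {
        "culvert_type": culvert,
        "debris_type": debris,
        "blockage_fraction": fraction,
        "camera_view": view,
        "time_of_day": tod,
        "label": label_for(fraction),
    }
    for culvert, debris, fraction, view, tod in itertools.product(
        CULVERT_TYPES, DEBRIS_TYPES, BLOCKAGE_FRACTIONS, CAMERA_VIEWS, TIMES_OF_DAY)
]

with open("sic_scenarios.json", "w") as f:
    json.dump(scenarios, f, indent=2)

print(f"Wrote {len(scenarios)} scene specifications")
```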
Some highlighted limitations included unrealistic effects and animations and a single natural background. Figure 3 shows samples from the SIC dataset.
AIoT system development
We developed an AIoT solution using edge computing hardware, computer vision models, and sensors for real-time visual blockage monitoring at culverts:
- A CCTV camera to capture the culvert.
- An NVIDIA TX2-powered edge computer to process images and infer the blockage status using trained computer vision models.
- 4G connectivity to transmit blockage-related data to a web-based dashboard.
- Computer vision models to detect and classify the visual blockage at culverts.
More specifically, in terms of software, a two-stage detection-classification pipeline is adopted (Figure 4).
Detection stage
In the first stage, a computer vision object detection model (YOLOv4) is used to detect the culvert openings. The detected openings are cropped from the original image and are processed for the classification stage. If no culvert opening is detected, an alert is issued to suggest that the culvert might be submerged.
Classification stage
In the second stage, a CNN classification model (such as ResNet-50) is used to classify the cropped culvert openings into one of three blockage classes (blocked, partially blocked, or clear). The blockage-related information is then transmitted to a web dashboard to help flood management officials make decisions.
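The following Python sketch outlines the detect-crop-classify logic of this two-stage pipeline. The model path, class order, and detector wrapper are assumptions for illustration rather than the deployed implementation.

```python
# Minimal sketch of the two-stage flow: detect culvert openings, crop them,
# classify each crop, and raise a submergence alert when no opening is found.
import numpy as np
import tensorflow as tf

# Assumed to match the label order used during training.
BLOCKAGE_CLASSES = ["blocked", "clear", "partially_blocked"]

# Hypothetical path to a trained classifier (for example, ResNet-50).
classifier = tf.keras.models.load_model("blockage_resnet50.h5")

def detect_culvert_openings(frame):
    """Hypothetical wrapper around the trained YOLOv4 detector.

    Returns a list of (x, y, w, h) bounding boxes for culvert openings.
    """
    raise NotImplementedError("Plug in the trained detection model here.")

def classify_openings(frame, boxes):
    """Crop each detected opening and classify its blockage status."""
    results = []
    for x, y, w, h in boxes:
        crop = tf.image.resize(frame[y:y + h, x:x + w], (224, 224))
        # The saved classifier is assumed to include its own preprocessing.
        probs = classifier.predict(crop[tf.newaxis, ...], verbose=0)[0]
        top = int(np.argmax(probs))
        results.append((BLOCKAGE_CLASSES[top], float(probs[top])))
    return results

def process_frame(frame):
    boxes = detect_culvert_openings(frame)
    if not boxes:
        # No opening detected: the culvert may be fully submerged.
        return {"status": "possible_submergence", "detections": []}
    return {"status": "ok", "detections": classify_openings(frame, boxes)}
```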
We trained the YOLOv4 and ResNet-50 models used for detection and classification, respectively, using the NVIDIA TAO platform powered by Python, TensorFlow, and Keras. We used a Linux machine equipped with the NVIDIA A100 GPU for training the models using images from the ICOB, VHD, and SIC datasets.
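The actual training used the TAO launcher and its spec files; as a stand-in, here is a minimal Keras sketch of fine-tuning a ResNet-50 blockage classifier on a directory of images drawn from the ICOB, VHD, and SIC datasets. The directory layout, image size, and hyperparameters are assumptions.

```python
# Stand-in for the TAO-based training: fine-tune a Keras ResNet-50 classifier
# on blockage images. Layout and hyperparameters are illustrative only.
import tensorflow as tf

IMG_SIZE = (224, 224)   # assumed input resolution
NUM_CLASSES = 3         # blocked, partially blocked, clear

# Assumed layout: data/train/<class_name>/*.jpg with ICOB, VHD, and SIC images mixed.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/val", image_size=IMG_SIZE, batch_size=32)

base = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet",
    input_shape=IMG_SIZE + (3,), pooling="avg")
base.trainable = False   # train only the new classification head first

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.resnet50.preprocess_input(inputs)
x = base(x)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
model.save("blockage_resnet50.h5")
```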
Here’s the four-stage approach adopted for development:
- Stage I: We prepared a dataset from real and simulated images.
- Stage II: We selected detection and classification models from the NVIDIA TAO model zoo and trained them using the TAO platform.
- Stage III: We exported trained models to be deployed on the NVIDIA TX2 edge computer.
- Stage IV: In the field, we deployed a complete hardware system and collected real data for fine-tuning the computer vision algorithms.
In terms of software performance, the culvert opening detection model achieved a validation mAP of 0.90, while the blockage classification model achieved a validation accuracy of 0.88.
We developed an end-to-end video analytics pipeline on the NVIDIA DeepStream 6 SDK, using the trained computer vision models to run inference on the NVIDIA TX2-powered edge computer. With these detection and classification models, the DeepStream pipeline achieved 24.8 FPS on the NVIDIA TX2 hardware.
We built the smart device for culvert blockage monitoring using a CCTV camera, an NVIDIA TX2 edge computer, and a 4G dongle (Figure 5). We optimized the hardware for power consumption and computational time to support real-time use. Powered by a solar panel, the hardware consumes only 9.1 W of power on average. The AIoT solution is also configured to transmit the blockage metadata to the web dashboard every hour.
To address privacy concerns, the solution avoids storing any images on board or in the cloud. Instead, it processes the images and transmits only the blockage metadata. Figure 5 shows the installation of the AIoT hardware at one of the remote sites to monitor culvert visual blockage.
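A minimal sketch of that metadata-only reporting is shown below; the endpoint URL, site identifier, and JSON fields are hypothetical.

```python
# Sketch of the hourly metadata upload: only blockage metadata, never images,
# is sent to the dashboard. URL, site ID, and fields are assumptions.
import json
import time
import urllib.request

DASHBOARD_URL = "https://example.com/api/blockage"   # hypothetical endpoint
SITE_ID = "culvert-site-01"                          # hypothetical site label

def send_blockage_metadata(status, confidence):
    """POST the latest blockage status to the web dashboard."""
    payload = {
        "site_id": SITE_ID,
        "timestamp": int(time.time()),
        "blockage_status": status,        # blocked / partially_blocked / clear
        "confidence": round(float(confidence), 3),
    }
    request = urllib.request.Request(
        DASHBOARD_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=30) as response:
        response.read()

# Called once an hour from the edge device's inference loop, for example:
send_blockage_metadata("partially_blocked", 0.91)
```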
Future research directions
The potential of computer vision can be further explored to establish a better understanding of visual blockage by extracting blockage-related information:
- Percentage visual blockage estimation
- Flood-borne debris type recognition
- Partially automated visual blockage classification
Percentage visual blockage estimation
In the context of flood management decision making, knowing the blockage status of a given culvert is not always enough to make a maintenance-related decision. Going one step further and estimating the percentage visual blockage at a given culvert assists flood management officials in prioritizing the culverts with high visual blockage.
One potential solution is a segmentation-classification pipeline that segments the visible openings from the image and classifies the segmented masks into one of four percentage visual blockage classes. Figure 6 shows the conceptual block diagram for percentage visual blockage estimation.
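As a conceptual sketch, once the visible portion of an opening has been segmented, a percentage blockage can be derived from the mask areas and bucketed into a class. The helper names, masks, and class boundaries below are assumptions, not the proposed pipeline itself.

```python
# Conceptual sketch only: derive a percentage visual blockage from two binary
# masks (full opening extent and visible portion), then bucket it into an
# illustrative class. The masks would come from a segmentation model.
import numpy as np

def percentage_blockage(opening_mask, visible_mask):
    """Blockage as the fraction of the opening area that is not visible."""
    opening_area = max(int(opening_mask.sum()), 1)   # guard against empty masks
    visible_area = int(visible_mask.sum())
    return 100.0 * (1.0 - visible_area / opening_area)

def blockage_band(pct):
    # Four illustrative percentage bands; actual bands would be set by policy.
    if pct < 25:
        return "0-25%"
    if pct < 50:
        return "25-50%"
    if pct < 75:
        return "50-75%"
    return "75-100%"

# Example with toy 2x2 masks: opening fully present, half of it visible.
opening = np.array([[1, 1], [1, 1]])
visible = np.array([[1, 1], [0, 0]])
print(blockage_band(percentage_blockage(opening, visible)))   # "50-75%"
```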
Flood-borne debris type recognition
The type of flood-borne debris interacting and accumulating at the culvert can result in distinct flooding impacts. Usually, vegetative debris is considered less concerning because of its porous nature in comparison to compact, urban debris.
Automatic detection of debris type is another crucial aspect to be explored.
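A minimal sketch of how per-type debris detections might be summarized is shown below; the detector and class names are hypothetical.

```python
# Illustrative only: summarize per-type debris detections from a hypothetical
# detector; the detector and its class names are assumptions.
from collections import Counter

def detect_debris(frame):
    """Hypothetical debris detector returning (class_name, confidence, box) tuples."""
    raise NotImplementedError("Plug in a trained debris detection model here.")

def summarize_debris(frame):
    counts = Counter(name for name, _conf, _box in detect_debris(frame))
    # Porous vegetative debris is generally less concerning than compact urban
    # debris, so downstream logic could weight these counts differently.
    return {"urban": counts.get("urban", 0),
            "vegetative": counts.get("vegetative", 0)}
```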
Partially automated visual blockage classification
As a simpler solution, a CNN classification model may be used to assist manual culvert inspections while keeping flood management officials in the loop. Given the complexity of the problem and our preliminary analysis, a CNN classification model alone cannot fully automate the process. However, a partially automated framework can be developed to facilitate it.
Figure 7 shows the concept of such a framework based on the classification probability of the trained model. If the classification probability for a given image is less than a given threshold, it can be flagged to flood management officials for cross-validation.
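A minimal sketch of that thresholding logic, with an assumed cut-off value, follows:

```python
# Sketch of the confidence-threshold triage: images the classifier is unsure
# about are flagged for manual cross-validation. Threshold and class order
# are assumptions.
import numpy as np

REVIEW_THRESHOLD = 0.80   # assumed cut-off; tune on validation data

def triage(probs, classes=("blocked", "clear", "partially_blocked")):
    """Return an automatic label, or flag the image for human review."""
    probs = np.asarray(probs, dtype=float)
    top = int(np.argmax(probs))
    if probs[top] < REVIEW_THRESHOLD:
        return {"decision": "needs_review",
                "suggested": classes[top],
                "confidence": float(probs[top])}
    return {"decision": classes[top], "confidence": float(probs[top])}

# Example: a low-confidence prediction gets routed to an official.
print(triage([0.45, 0.15, 0.40]))   # {'decision': 'needs_review', ...}
```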
Summary
We presented an edge-computing solution for visual blockage detection at culverts to assist timely maintenance and help avoid blockage-related flooding events.
A detection-classification computer vision pipeline was developed and deployed on NVIDIA edge-computing hardware to report the blockage status of a culvert as "clear," "blocked," or "partially blocked." To facilitate training computer vision models for this unique problem domain, we used simulated and synthetically generated images related to culvert visual blockage.
There is tremendous scope for extending this solution to provide improved and additional visual blockage information. Estimating the percentage visual blockage, recognizing flood-borne debris types, and developing a partially automated visual blockage classification framework are a few potential enhancements to the existing solution.