How to Capture AI-Friendly Pipe Inspection Footage
As VAPAR’s CTO, it’s safe to say I’ve developed a good sense of which inspection footage works well (and which doesn’t) for automated pipe inspections using artificial intelligence (AI).
Over the last few years, the capability of image-recognition AI models has improved significantly, making automation a serious time-saver for many organisations looking to optimise or streamline their image-based assessments.
Although the accuracy of artificial intelligence has improved over this time, the results AI models can produce are sometimes limited by the characteristics of the inspection footage they are fed. If contractors want to maximise the results they achieve for themselves and their clients using AI, there are some recommendations I’ve observed that should be followed.
As different AI vendors may have different ways of handling these challenges and developing solutions, I’ve tried to cover each point with a generalist approach. Many of these challenges would also apply to a person trying to provide a condition assessment based on the footage alone.
Challenges and Limitations
Firstly, to get some better context around the recommendations, I’ll outline the main challenges and limitations of AI for automated CCTV coding I’ve observed during my time with VAPAR.
Generally, pipe inspection standards define a number of codes that require a level of granular detail which is not reliably achievable, by operators or software, without quantitative computer vision and tracking of camera telemetry.
Sizing of Features
Determining the size of features within millimetre accuracy is a challenging task for software and human operators alike. As an alternative criterion for software, categorisation of defect severity could be undertaken using relative categories, such as ‘small’, ‘medium’ and ‘large’, that are aligned with quantitative ranges.
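To make this relative-category approach concrete, here is a minimal Python sketch. The millimetre thresholds are hypothetical placeholders, not values taken from any inspection standard, and would need to be aligned with the standard in use.

```python
# Illustrative sketch only: the millimetre thresholds below are
# hypothetical placeholders, not values from any inspection standard.
SEVERITY_BANDS = [
    (10.0, "small"),          # up to 10 mm
    (25.0, "medium"),         # over 10 mm, up to 25 mm
    (float("inf"), "large"),  # anything larger
]

def severity_category(size_mm: float) -> str:
    """Map an estimated defect size to a relative severity category."""
    for upper_bound, label in SEVERITY_BANDS:
        if size_mm <= upper_bound:
            return label
```

The point of the banding is that software only needs to estimate a size to within the band width, rather than to millimetre precision.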
Using 12 segments (named to align with clock references) to locate defects around the pipe circumference can be challenging, depending on the amount of panning, tilting and zooming the operator undertakes during the inspection. Quadrants or eighths would likely yield more consistent results from both manual and automated assessment.
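Collapsing clock references into quadrants is simple integer arithmetic. The mapping below is a hypothetical convention chosen for illustration (positions 1–3 → quadrant 1, 4–6 → 2, 7–9 → 3, 10–12 → 4), not one drawn from any published standard.

```python
def clock_to_quadrant(clock: int) -> int:
    """Collapse a 12-segment clock reference (1-12) into a quadrant (1-4).

    Hypothetical convention: positions 1-3 -> quadrant 1, 4-6 -> 2,
    7-9 -> 3, 10-12 -> 4.
    """
    if not 1 <= clock <= 12:
        raise ValueError("clock reference must be between 1 and 12")
    return (clock - 1) // 3 + 1
```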
Soil Through Defect
Currently, distinguishing the difference between soil visible through a defect, debris sitting inside a pipe, or roots can be a difficult task for AI.
Start & Finish Nodes
Start nodes may not always be present in footage captured by CCTV contractors. Furthermore, the type of maintenance hole used to access pipes can be difficult for AI to ascertain. Inspection footage is typically started from the centreline of the maintenance hole pointed directly down the barrel of the pipe to be inspected. These nodes are typically evident to the CCTV operator as they require entry to perform the inspection.
Discrete vs Continuous Defects
It can be difficult to determine whether defects are discrete or continuous when a CCTV camera is moving through a pipe, because defects jump in and out of frame during camera operation.
Multiple Assets in a Single Video
Where a CCTV camera travels through more than one asset, AI needs a way of identifying this distinction and handling the condition assessment of each asset separately. Otherwise the defects detected would all be assumed to belong to a single pipe asset, which is incorrect.
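One way to picture this separation is to split the detections at the chainages where an asset boundary (e.g. a maintenance hole) is identified. This is a hypothetical sketch; the tuple structure and boundary detection are assumptions for illustration.

```python
# Hypothetical sketch: split one video's detections into per-asset groups
# at the chainages where a new asset begins. Data shapes are illustrative.
def split_by_asset(detections, boundary_chainages):
    """detections: (chainage_m, code) tuples, sorted by chainage.
    boundary_chainages: sorted chainages where a new asset begins."""
    assets = [[] for _ in range(len(boundary_chainages) + 1)]
    for chainage, code in detections:
        # Count how many boundaries have been passed at this chainage.
        idx = sum(chainage >= b for b in boundary_chainages)
        assets[idx].append((chainage, code))
    return assets
```

Each resulting group can then be assessed as its own asset rather than pooling every defect into one condition score.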
Multiple Inspection Time Frames in a Single Video
Where a camera operator approaches an issue that needs to be immediately resolved (such as a blockage), they can stop the recording, clear the issue, and resume recording. Where the halted inspection footage and the completed inspection footage for the same asset are in a single video, AI needs a way of distinguishing the previous or ‘abandoned’ footage from the ‘completed’ footage, and then overriding the abandoned condition assessment with that of the completed footage.
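The override logic can be sketched as: once segments have been identified, prefer the last completed pass. This is a minimal illustration under assumed data structures, not VAPAR's implementation.

```python
# Hypothetical sketch of choosing which pass's assessment to keep when a
# single video contains abandoned and completed inspections of one asset.
from dataclasses import dataclass, field

@dataclass
class Segment:
    """One recorded pass within a video (field names are illustrative)."""
    completed: bool        # did this pass reach the finish node?
    defects: list = field(default_factory=list)

def select_assessment(segments):
    """Prefer the last completed pass; otherwise fall back to the last pass."""
    completed_passes = [s for s in segments if s.completed]
    chosen = completed_passes[-1] if completed_passes else segments[-1]
    return chosen.defects
```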
Shape or Dimensions Change
Where pipe shape or dimensions change, quantifying the extent of this change can be difficult to determine when using visual inspection footage alone.
Now that I’ve outlined the core problems we’ve encountered with AI for automated CCTV coding, let’s cover some tips to ensure you’re capturing AI-friendly pipe inspection footage.
There are a number of standard procedures that operators can apply to ensure inspection footage is optimised for use with AI pipe assessments. Areas where standardised procedure can be introduced to great effect are:
- Standardising the asset information block at the start of footage capture.
- Standardising the chainage on-screen display positioning.
- Standardising a requirement for the CCTV camera head to be centred within the pipe, with the field of view also centred (so that the top and bottom of the pipe are seen equally).
There are also a number of procedural restrictions which CCTV operators can observe in order to create footage optimised for AI-based pipe assessments. These include:
- Restriction on using cleaning footage (i.e. CCTV captured during jetting, where the jetting head is visible throughout and obscures the field of view) for condition assessment.
- Restriction on reversing significant distances through the pipe – this can cause offsets in the chainage measurement and also cause problems for the AI, which may duplicate detections of defects and features.
- Restriction on zooming whilst moving (either driving forward or panning), as this makes the camera movement difficult to track.
- Restriction on stopping and starting the capture of footage within a single video – i.e. where cleaning is performed or the camera is moved without recording, the inspection should be captured in a single pass.
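The duplicate-detection problem caused by reversing can be illustrated with a simple chainage-based de-duplication pass. The field layout and the tolerance value are assumptions for illustration only.

```python
# Hypothetical sketch: drop defect detections that reappear when the
# camera reverses, by treating detections of the same code within a
# small chainage tolerance as one defect. Values are illustrative.
def dedupe_defects(detections, tolerance_m=0.3):
    """detections: list of (chainage_m, code) tuples in detection order."""
    kept = []
    for chainage, code in detections:
        duplicate = any(
            code == kept_code and abs(chainage - kept_chainage) <= tolerance_m
            for kept_chainage, kept_code in kept
        )
        if not duplicate:
            kept.append((chainage, code))
    return kept
```

A tolerance like this is a blunt instrument, which is exactly why avoiding long reversals in the first place produces cleaner results.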
These recommendations cover some of the main factors we’ve identified that can impact the post-processing of video files – whether by AI or by a human inspector.