In a paper published in the journal Drones, researchers introduced an innovative automated system for monitoring and maintaining remote gravel runways in Northern Canada.
This system addressed the challenges of isolation and harsh weather by using unmanned aerial vehicles (UAVs) and computer vision with deep learning (DL) algorithms. It detected runway defects like water pooling, vegetation growth, and surface irregularities by analyzing high-resolution UAV imagery through a combined vision transformer model and image processing techniques.
Beyond defect detection, it also evaluated runway smoothness to enhance air transport safety and reliability in these regions. Real-world experiments across multiple remote airports validated the effectiveness of this UAV and DL-based approach over traditional manual inspection methods.
Background
Past work has focused on using deep learning techniques such as convolutional neural networks (CNNs) and computer vision to detect defects on asphalt or concrete airport runways. Some studies have utilized UAVs to capture aerial imagery and applied image segmentation algorithms for automated crack detection and pavement condition assessment. However, little research has addressed gravel runway inspection, which requires different approaches due to the unique characteristics of gravel surfaces. Existing methods for gravel runway evaluation still rely heavily on manual processes.
Gravel Runway Analysis
The smoothness of a gravel runway is evaluated by quantifying surface irregularities while distinguishing them from the normal runway texture. This is achieved with a bilateral filter, which smooths texture while preserving edges; the difference between the filtered image and the original highlights irregularities.
Morphological operations such as erosion and dilation, together with the Ramer-Douglas-Peucker algorithm for contour approximation, refine the results so that only relevant irregularities are retained. Finally, a modified sigmoid function rates the runway condition on a 1-5 scale, with higher values indicating a greater need for maintenance.
The methodology involves training the model on a dataset of 4K RGB images captured by UAVs at 40-70 m altitude over six remote airports in Northern Canada. Images were pre-processed by resizing to 1024x1024 pixels and augmented through flipping, saturation, and exposure adjustments. Key features like water pooling, vegetation, and runway edges were manually annotated for supervised learning.
Performance is evaluated using standard metrics for image segmentation tasks - Intersection over Union (IoU), accuracy, F-score, precision, and recall. IoU measures the overlap between predicted and ground truth regions, accuracy counts correct predictions, F-score combines precision and recall, precision captures true positives among predicted positives, and recall finds true positives among actual positives.
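These metrics follow directly from the true/false positive and negative counts of the predicted masks. A self-contained sketch over binary masks:

```python
# Standard segmentation metrics (IoU, accuracy, precision, recall, F-score)
# computed from a predicted binary mask and its ground-truth mask.
import numpy as np

def segmentation_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()     # correctly predicted positives
    fp = np.logical_and(pred, ~gt).sum()    # spurious predictions
    fn = np.logical_and(~pred, gt).sum()    # missed ground-truth pixels
    tn = np.logical_and(~pred, ~gt).sum()   # correctly predicted background
    precision = tp / (tp + fp)              # true positives among predictions
    recall = tp / (tp + fn)                 # true positives among ground truth
    return {
        "iou": tp / (tp + fp + fn),         # overlap over union
        "accuracy": (tp + tn) / pred.size,  # fraction of correct pixels
        "precision": precision,
        "recall": recall,
        "f_score": 2 * precision * recall / (precision + recall),
    }
```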
The bilateral filter, morphological operations, Ramer-Douglas-Peucker algorithm, and modified sigmoid function analyze runway imagery, identify defects and irregularities, and provide an automated rating of the gravel runway's smoothness condition. This novel vision-based system aims to enhance inspection capabilities for remote airports.
Automated Runway Inspection
The selected image segmentation models for the project were Mask R-CNN, PointRend, and Mask2Former. Initially, these models were trained and tested using the LARD dataset, which consisted of 1500 aerial front-view images of runways taken during the aircraft landing phase. The dataset was resized and split into an 8:2 train-validate ratio. After obtaining the UAV dataset of six remote airports, the team retrained the three models to compare their performance.
The UAV dataset contained 6832 images, and the models were trained with a batch size of 2 for 100 epochs. Mask2Former outperformed the other two models in accuracy and intersection over union (IoU). The team conducted a final training session for Mask2Former to ensure the best fit before deployment. The dataset was also augmented by manually adding water pools to the images, which improved the detection of water pooling on the runway and led to better overall IoU and accuracy.
To streamline the analysis of runway images, an automated pipeline was developed that integrated the essential stages of slicing, detecting, and merging. Large orthorectified images were segmented into smaller pieces for detailed analysis, and each sliced segment underwent a detailed detection process using the trained Mask2Former model. This step identified and classified points of interest (POIs) such as surface irregularities, water pooling, and vegetation encroachment. The system also evaluated the smoothness of the runways.
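The slice-detect-merge pipeline can be sketched as follows. The tile size is an assumption matching the stated training resolution, and the `detect` placeholder stands in for Mask2Former inference, which is not reproduced here:

```python
# Sketch of the automated pipeline: slice a large orthorectified image
# into tiles, run detection on each tile, and merge the results back
# into a full-size mask of points of interest (POIs).
import numpy as np

TILE = 1024  # assumed tile size matching the model input resolution

def slice_image(img: np.ndarray) -> list:
    # Return (row, col, tile) triples covering the whole image.
    h, w = img.shape[:2]
    return [(y, x, img[y:y + TILE, x:x + TILE])
            for y in range(0, h, TILE) for x in range(0, w, TILE)]

def detect(tile: np.ndarray) -> np.ndarray:
    # Placeholder for Mask2Former inference: per-pixel POI class mask.
    return np.zeros(tile.shape[:2], dtype=np.uint8)

def merge(shape: tuple, results: list) -> np.ndarray:
    # Stitch per-tile masks back into a full-resolution mask.
    full = np.zeros(shape, dtype=np.uint8)
    for y, x, mask in results:
        full[y:y + mask.shape[0], x:x + mask.shape[1]] = mask
    return full

def run_pipeline(ortho: np.ndarray) -> np.ndarray:
    tiles = slice_image(ortho)
    results = [(y, x, detect(t)) for y, x, t in tiles]
    return merge(ortho.shape[:2], results)
```

Edge tiles smaller than the full tile size are handled by merging each mask at its own dimensions, so the stitched output always matches the input footprint.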
The research has significant potential for improving aviation safety and operational efficiency at remote airports with unpaved runways. The implications of the work extend beyond Northern Canada and offer several critical benefits globally. Countries with vast, sparsely populated areas, such as the United States, Australia, and New Zealand, also rely on gravel runways to connect remote communities. Developing nations with limited road and rail infrastructure could likewise benefit from the automated runway inspection and maintenance system.
Conclusion
The paper introduced a novel approach for automating the monitoring and maintenance of gravel runways at remote airports using UAV imagery and advanced computer vision techniques. The approach accurately detected and segmented runway defects such as water pooling, vegetation, and rough surfaces.
Extensive experimentation with diverse aerial images demonstrated the approach's effectiveness and robustness. The approach's potential applications extended beyond aviation, including infrastructure, agriculture, and environmental monitoring. The automated system offered a universal, effective, and user-friendly solution for airports globally.