Redefining Autonomous Vehicle Navigation: Machine Vision and Deep Learning on Unmarked Roads

In a paper published in the journal Vehicles, researchers presented an innovative approach to autonomous vehicle navigation that challenges the conventional reliance on road markings and specialized hardware. Their method employs machine vision, machine learning (ML), and artificial intelligence (AI), leveraging pre-trained convolutional neural networks (CNNs) and recurrent neural networks (RNNs) to guide vehicles on roads without lane markings.

Study: Redefining Autonomous Vehicle Navigation: Machine Vision and Deep Learning on Unmarked Roads. Image credit: Suwin/Shutterstock

They conducted experiments using the Autonomous Campus Transport (ACTor) vehicle, equipped with an economical webcam-based sensing system and minimal computational resources. The results of the study demonstrated the feasibility of autonomously navigating unmarked roads while maintaining satisfactory road-following behavior.

Context

Recent advancements in technology and AI have sparked significant interest in autonomous vehicles, which could revolutionize transportation and road safety. ML algorithms, particularly artificial neural networks (ANNs), have been a critical focus in enabling these vehicles to perceive their environment and make decisions. While ANNs, including CNNs, have shown promise, improving their ability to adapt to new situations and environments remains challenging. One avenue for enhancement is transfer learning, which uses pre-trained CNNs to leverage extensive data for more robust performance.

Previous work in autonomous vehicle research, exemplified by the 1988 ALVINN project, laid the foundation for using neural networks to guide vehicles by processing visual data. Recent advancements have made CNNs vital in image-based tasks within the autonomous driving domain. Transfer learning, leveraging pre-trained networks on large datasets, has emerged as a strategy to enhance vehicle performance. Building on previous research known as "Deep Steer," this study addresses limitations and introduces RNNs to improve driving behavior. The primary focus is refining autonomous vehicles' performance on unmarked roads, reflecting the ongoing quest for safer and more versatile self-driving transportation.

Research Methodology and Infrastructure

The research methodology comprised three steps: Data Collection; Model Training; and Self-Drive and Inferencing. During the Data Collection phase, the researchers gathered essential data by driving the test vehicle on various routes while recording forward-facing images and the associated steering wheel angles. This data was crucial for training and evaluating neural networks to predict the steering wheel angle required for safe navigation on different roads.

The researchers captured images at a sample rate of 5 Hz, specifically chosen to match the vehicle's target speed of 5 mph. The routes covered a range of corner curvatures and intentionally lacked lane markings, simulating real-world scenarios where the absence of lane markings presented a challenge for autonomous navigation. By collecting data on different road surfaces and conditions, the research aimed to create a comprehensive dataset that would support the neural network training process.
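To make the collection step concrete, the following minimal Python sketch pairs webcam frames with steering readings at a fixed rate. It is an illustration only: the read_steering_angle() helper, the file names, and the simple timer-based pacing are assumptions, not details from the paper.

```python
import csv
import time

import cv2


def read_steering_angle():
    """Hypothetical stand-in for the drive-by-wire steering feedback."""
    return 0.0


def collect(duration_s=60.0, rate_hz=5.0):
    cam = cv2.VideoCapture(0)       # forward-facing webcam
    period = 1.0 / rate_hz          # 0.2 s between samples at 5 Hz
    with open("drive_log.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["image_file", "steering_deg"])
        start, frame_id = time.time(), 0
        while time.time() - start < duration_s:
            ok, frame = cam.read()
            if not ok:
                break
            name = f"frame_{frame_id:06d}.jpg"
            cv2.imwrite(name, frame)                        # save the frame
            writer.writerow([name, read_steering_angle()])  # pair with label
            frame_id += 1
            time.sleep(period)      # crude fixed-rate pacing
    cam.release()
```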

The hardware used for this research centered around the ACTor vehicle, which served as the experimental platform. The researchers equipped the ACTor vehicle with a low-cost webcam-based sensing system and intentionally minimal computational resources. A Genius webcam with a 120° wide-angle lens was mounted on the front of the vehicle, providing 1080p Full High Definition (HD) video at up to 30 fps. Data collection and processing were conducted on an Ubuntu laptop with deliberately modest hardware specifications.

Although the laptop featured only an Intel Core i7-4500U Central Processing Unit (CPU) at 1.8 GHz (two cores, four threads) and 8.0 GB of physical memory, it effectively supported the research's data processing needs. The choice of hardware, while not state-of-the-art, demonstrated that the proposed solution could be implemented with readily available, cost-effective components, making it more accessible for potential applications in various settings.

Regarding software, the research relied on two environments to develop and evaluate the Deep Steer solutions: an in-vehicle system based on the Robot Operating System (ROS) and an offline environment using JupyterLab with Python. The ROS-based in-vehicle system handled the acquisition of neural network training data and the execution of the Deep Steer algorithms for vehicle control on private and public roads. It integrated various components, including the Dataspeed drive-by-wire system, the webcam, image processing, and neural network evaluation.
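A minimal sketch of what such an in-vehicle ROS node could look like appears below. The topic names, message types, and the predict_steering() stand-in are assumptions for illustration, not the paper's actual interfaces.

```python
#!/usr/bin/env python
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image
from std_msgs.msg import Float64


def predict_steering(frame):
    """Hypothetical stand-in for the trained network's inference call."""
    return 0.0


class DeepSteerNode:
    def __init__(self):
        self.bridge = CvBridge()
        # Assumed topic names; a real vehicle would use its own interfaces.
        self.pub = rospy.Publisher("/vehicle/steering_cmd", Float64,
                                   queue_size=1)
        rospy.Subscriber("/webcam/image_raw", Image, self.on_image,
                         queue_size=1)

    def on_image(self, msg):
        # Convert the ROS image to an OpenCV frame, then publish a command.
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
        self.pub.publish(Float64(predict_steering(frame)))


if __name__ == "__main__":
    rospy.init_node("deep_steer")
    DeepSteerNode()
    rospy.spin()
```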

In contrast, the offline environment centered around JupyterLab and Python, employing essential libraries such as Keras and TensorFlow for neural network construction and training, Pandas DataFrames for data manipulation, scikit-learn for dataset splitting, and the Open Source Computer Vision Library (OpenCV) for general image processing and histogram matching. These software tools and frameworks enabled the development and training of the neural networks underpinning the Deep Steer solution.
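The offline preparation step can be sketched as follows, assuming a hypothetical drive_log.csv produced during collection. The 80/20 split ratio and the histogram-equalization preprocessing (a simple stand-in for the paper's histogram matching) are illustrative choices, not the study's exact procedure.

```python
import cv2
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

# Load the hypothetical collection log (image path + steering angle pairs).
log = pd.read_csv("drive_log.csv")
# Assumed 80/20 train/test split; the paper's exact ratio is not given here.
train_df, test_df = train_test_split(log, test_size=0.2, random_state=42)


def load_image(path, size=(299, 299)):
    """Read and preprocess one frame; 299x299 is InceptionV3's native size."""
    img = cv2.imread(path)
    img = cv2.resize(img, size)
    # Stand-in for the paper's histogram matching: equalize luminance so
    # lighting differences between collection runs matter less.
    ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
    img = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
    return img.astype(np.float32) / 255.0  # scale pixels for the network


X_train = np.stack([load_image(p) for p in train_df["image_file"]])
y_train = train_df["steering_deg"].to_numpy()
```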

Together, these steps and the supporting hardware and software formed a structured, comprehensive approach to researching and implementing autonomous vehicle control on unmarked roads. By collecting representative data, utilizing accessible hardware, and leveraging widely used software tools, the research aimed to provide practical insights into autonomous navigation without relying on lane markings or specialized equipment.

Study Findings

The research involved training, validating, and testing neural networks based on Inception Version 3 (InceptionV3), Visual Geometry Group 16 (VGG16), and VGG19 for steering prediction. These networks were trained on webcam images collected from the ACTor vehicle. The InceptionV3-based model displayed the most robust performance, reducing the mean absolute error (MAE) to 3.10°, a significant improvement over previous research.
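A transfer-learning steering regressor of this kind can be sketched in Keras as below. The frozen ImageNet backbone, head size, and optimizer settings are assumptions; the MAE loss mirrors the metric the study reports.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

# Pre-trained backbone; ImageNet weights provide the transferred features.
backbone = InceptionV3(weights="imagenet", include_top=False,
                       input_shape=(299, 299, 3))
backbone.trainable = False          # keep the transferred features frozen

model = models.Sequential([
    backbone,
    layers.GlobalAveragePooling2D(),
    layers.Dense(64, activation="relu"),  # assumed head size
    layers.Dense(1),                      # predicted steering angle (degrees)
])
# MAE matches the evaluation metric reported in the study.
model.compile(optimizer="adam", loss="mae", metrics=["mae"])
# model.fit(X_train, y_train, validation_split=0.1, epochs=20)
```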

The models were assessed on a separate application route, demonstrating their effectiveness in predicting steering angles on unseen data. The RNN-based model achieved an MAE of 3.82°, offering a promising alternative. The study also pinpointed areas for future research, including addressing speed dependency and optimizing the solution to reduce the data and computational resources required for autonomous steering.
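One way to introduce recurrence over per-frame CNN features is sketched below. The sequence length, LSTM width, and the reuse of InceptionV3 as the feature extractor are assumptions made for illustration, not the paper's exact architecture.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

SEQ_LEN = 5  # assumed: roughly 1 s of frames at the 5 Hz sample rate

# Frozen per-frame feature extractor (global average pooling -> 2048 floats).
cnn = InceptionV3(weights="imagenet", include_top=False, pooling="avg",
                  input_shape=(299, 299, 3))
cnn.trainable = False

model = models.Sequential([
    # Apply the CNN to each frame in the sequence independently.
    layers.TimeDistributed(cnn, input_shape=(SEQ_LEN, 299, 299, 3)),
    layers.LSTM(64),   # recurrence smooths predictions across frames
    layers.Dense(1),   # steering angle for the most recent frame
])
model.compile(optimizer="adam", loss="mae", metrics=["mae"])
```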

Conclusion

To sum up, real-time testing on the application route revealed that the InceptionV3-based model outperformed the VGG16 and VGG19 models. The InceptionV3 model allowed the vehicle to navigate the route accurately, with minimal steering errors, even through intersections and past driveways. Conversely, the VGG16 and VGG19 models occasionally under-predicted steering, causing the vehicle to veer off the road and end the test prematurely. Based on these findings, the researchers recommend the InceptionV3-based model. Furthermore, adding an RNN improved prediction accuracy and enhanced road-following behavior during training and testing.


Written by

Silpaja Chandrasekar

Dr. Silpaja Chandrasekar has a Ph.D. in Computer Science from Anna University, Chennai. Her research expertise lies in analyzing traffic parameters under challenging environmental conditions. Additionally, she has gained valuable exposure to diverse research areas, such as detection, tracking, classification, medical image analysis, cancer cell detection, chemistry, and Hamiltonian walks.

