Improved Deep Reinforcement Learning for Collision-Free Routing in Vehicular Ad-Hoc Networks

In an article recently published in the journal Scientific Reports, researchers proposed a deep reinforcement learning (DRL)-based technique for collision-free routing in vehicular ad-hoc networks (VANETs).


Background

VANETs primarily consist of different entities that are integrated for effective communication among themselves and with other associated services. VANETs are increasingly being used in transportation engineering to provide traffic guidance, mitigate congestion, optimize traffic flow, and augment road safety.

The classic VANET model includes roadside units (RSUs), traffic analyzers (TAs), and on-board units (OBUs). Additionally, VANET communication can be classified into three models: vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), and vehicle-to-roadside (V2R) communication.
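To make these components concrete, the following minimal Python sketch (illustrative only, not taken from the study) models OBUs, RSUs, and the three communication models; the field names and the assumed coverage radius are placeholders, not values from the paper.

```python
from dataclasses import dataclass
from enum import Enum, auto

class CommModel(Enum):
    V2V = auto()   # vehicle-to-vehicle
    V2I = auto()   # vehicle-to-infrastructure
    V2R = auto()   # vehicle-to-roadside

@dataclass
class OnBoardUnit:
    vehicle_id: int
    position: tuple[float, float]   # (x, y) in metres
    velocity_mps: float

@dataclass
class RoadsideUnit:
    rsu_id: int
    position: tuple[float, float]
    coverage_radius_m: float = 300.0  # assumed coverage; not specified in the study

    def in_range(self, obu: OnBoardUnit) -> bool:
        # Simple Euclidean check for whether an OBU is within this RSU's coverage.
        dx = obu.position[0] - self.position[0]
        dy = obu.position[1] - self.position[1]
        return (dx * dx + dy * dy) ** 0.5 <= self.coverage_radius_m
```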

The RSUs and OBUs facilitate effective communication between vehicles in both V2I and V2V scenarios, and the wireless technology used in VANETs, designated wireless access in vehicular environments (WAVE), enables communication between RSUs and vehicles.

The WAVE communication system ensures passenger safety by providing real-time vehicle and traffic information updates. However, excessive control overhead and routing complexities are the major challenges in VANETs. Machine learning (ML) models can facilitate the route selection process in VANETs. Specifically, ML techniques can assist RSUs in effectively regulating vehicular mobility and mitigating traffic congestion.

The proposed approach

In this study, researchers proposed an improved deep reinforcement learning (IDRL) approach for routing the vehicle unit (VU) in VANETs while reducing the added control overhead. The IDRL routing technique can simultaneously optimize the routing path and reduce the convergence time under dynamic vehicle densities.

Additionally, the IDRL can effectively predict, analyze, and monitor routing behavior by leveraging transmission capacity and vehicle data. Transmission delay can thus be reduced by using adjacent vehicles to carry packets through V2I communication. The proposed technique also enables RSUs to maintain traffic information on the roads served by the IDRL, improving network capacity performance.

The IDRL Routing Protocol: The IDRL model was used to predict variations in vehicle density, which were then extrapolated to forecast vehicular motion on roadways. The proposed technique did not rely on a predetermined routing protocol; instead, the route was selected dynamically based on the transmission capacity and a high probability of successful delivery.
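The paper's implementation is not reproduced here. As a hedged illustration of learning-driven next-hop selection, the sketch below substitutes a simple tabular Q-learning agent for the deep network; the state encoding and the reward that favors high transmission capacity and successful, low-delay delivery are assumptions for demonstration only.

```python
import random
from collections import defaultdict

class RouteSelectionAgent:
    """Tabular stand-in for the learned routing policy (illustrative only)."""

    def __init__(self, alpha: float = 0.1, gamma: float = 0.9, epsilon: float = 0.1):
        self.q = defaultdict(float)   # (state, next_hop) -> estimated value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose_next_hop(self, state, candidate_hops):
        # Explore occasionally; otherwise pick the hop with the best learned value.
        if random.random() < self.epsilon:
            return random.choice(candidate_hops)
        return max(candidate_hops, key=lambda hop: self.q[(state, hop)])

    def update(self, state, hop, reward, next_state, next_candidates):
        # One-step Q-learning update toward the reward plus discounted best next value.
        best_next = max((self.q[(next_state, h)] for h in next_candidates), default=0.0)
        target = reward + self.gamma * best_next
        self.q[(state, hop)] += self.alpha * (target - self.q[(state, hop)])

def hop_reward(delivered: bool, link_capacity: float, delay_s: float) -> float:
    # Illustrative reward: favour successful, high-capacity, low-delay forwarding.
    return (1.0 if delivered else -1.0) + 0.1 * link_capacity - 0.5 * delay_s
```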

The system was trained using the vehicle's speed and movement relative to the nearby RSUs traversed by the VU, and the data was updated dynamically as the incoming unit approached an RSU. Wired links interconnected the RSUs within the study coverage area so that information about an arriving vehicle, specifically its velocity and position, could be disseminated to adjacent RSUs.
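As a rough illustration of this RSU-to-RSU dissemination, the sketch below (structure and names assumed, not from the paper) shows one RSU recording an arriving vehicle's velocity and position and pushing the update to adjacent RSUs over the wired backhaul.

```python
from dataclasses import dataclass

@dataclass
class VehicleUpdate:
    vehicle_id: int
    position: tuple[float, float]  # metres within the coverage area
    velocity_mps: float            # metres per second

class WiredRsu:
    def __init__(self, rsu_id: int):
        self.rsu_id = rsu_id
        self.neighbors: list["WiredRsu"] = []            # adjacent RSUs on the wired backhaul
        self.known_vehicles: dict[int, VehicleUpdate] = {}

    def on_vehicle_arrival(self, update: VehicleUpdate) -> None:
        # Record the arriving vehicle locally, then push the update to neighbours
        # so they can anticipate the vehicle before it enters their coverage.
        self.known_vehicles[update.vehicle_id] = update
        for rsu in self.neighbors:
            rsu.receive_update(update)

    def receive_update(self, update: VehicleUpdate) -> None:
        self.known_vehicles[update.vehicle_id] = update
```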

Determining a vehicle's position using the global positioning system (GPS) is often challenging owing to the high velocity of moving vehicles and the privacy and security concerns associated with GPS technology. The proposed DRL-based approach could mitigate the difficulty of locating the vehicle's position. Overall, the protocol consisted of the route establishment phase (REP), optimal route establishment (ORE), the route selection phase, and IDRL-based route selection, as outlined below.
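The outline below is a simplified, assumed rendering of that flow; the stage bodies are placeholders and the capacity metric is illustrative, since the article does not detail the exact procedures of each phase.

```python
def route_establishment_phase(source, destination, topology):
    # REP: gather the feasible candidate routes between source and destination (placeholder).
    return topology.get((source, destination), [])

def optimal_route_establishment(candidate_routes):
    # ORE: rank the candidates, here by an assumed transmission-capacity metric.
    return sorted(candidate_routes, key=lambda route: route["capacity"], reverse=True)

def select_route_idrl(ranked_routes, policy_score):
    # IDRL-based selection: choose the route the learned policy scores highest.
    return max(ranked_routes, key=policy_score, default=None)

# Toy usage with an illustrative two-route topology.
topology = {("A", "B"): [{"path": ["A", "RSU1", "B"], "capacity": 5.0},
                         {"path": ["A", "RSU2", "B"], "capacity": 3.5}]}
candidates = route_establishment_phase("A", "B", topology)
ranked = optimal_route_establishment(candidates)
best = select_route_idrl(ranked, policy_score=lambda route: route["capacity"])
print(best["path"])  # ['A', 'RSU1', 'B']
```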

Experiment evaluation and findings

Researchers evaluated the performance of the proposed method using the NS-2.35 network simulator in an area measuring 1000 m by 1000 m. They performed a comparative analysis of the proposed IDRL technique against existing peer routing techniques, including adaptive ranking-based improved opportunistic routing (ARIOR), adaptive ranking-based energy-efficient opportunistic routing (AREOR), and improved-AREOR (I-AREOR), to evaluate the packet delivery ratio (PDR).
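For reference, PDR is conventionally computed as the fraction of sent packets that reach their destination; the short sketch below shows that calculation (the function name and toy numbers are illustrative, not results from the study).

```python
def packet_delivery_ratio(packets_sent: int, packets_received: int) -> float:
    """PDR = packets received at the destination / packets sent, as a fraction in [0, 1]."""
    if packets_sent == 0:
        return 0.0
    return packets_received / packets_sent

# Example: 9,500 of 10,000 packets delivered gives a PDR of 0.95 (95%).
assert abs(packet_delivery_ratio(10_000, 9_500) - 0.95) < 1e-9
```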

The simulation outcomes were analyzed to evaluate the scalability and resilience of the IDRL model in mitigating the amplified overheads while delivering efficient routing. The proposed method proved highly effective at transmitting messages safeguarded through V2I communication. In the simulation results, the IDRL routing approach displayed reduced latency, improved data reliability, and an increased PDR compared with currently available routing techniques.

The IDRL routing technique attained a higher PDR than the other techniques, with an average increase of 5% over ARIOR, the second-best technique. This gain was attributed to the retention of connectivity information between units through announcement messages in the IDRL approach.

In the comparative analysis of PDR under high and low vehicle densities, the proposed method displayed a 6.5% performance increase over the ARIOR method, while other approaches showed instability owing to their dependence on a distributed architecture. Moreover, the comparative analysis of control overhead in VANETs between IDRL, ARIOR, I-AREOR, and AREOR showed that the IDRL implementation achieved a greater reduction in control overhead than the existing methods.

