International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 186 - Number 75
Year of Publication: 2025
Authors: Isha Das, Md. Jisan Ahmed
Isha Das, Md. Jisan Ahmed. Distributed Deep Reinforcement Learning for Decentralized Autonomous Vehicle Coordination in Urban Environment. International Journal of Computer Applications. 186, 75 (Mar 2025), 43-51. DOI=10.5120/ijca2025924643
This research tackles the coordination of self-driving vehicles in crowded cities with a decentralized strategy based on deep reinforcement learning. The aim is to design a smart and robust framework that lets many agents drive adaptively in real time, reducing congestion and improving overall safety. Each vehicle cooperates with surrounding cars without a centralized controller by making local decisions and selectively sharing information. The results show that collisions and average travel time drop considerably across varied traffic conditions, which improves traffic flow and points toward self-organizing traffic systems. City planners and car manufacturers can apply this decentralized strategy to large-scale traffic control schemes, easing commuting and reducing the load on infrastructure. Unlike prior work, this study emphasizes on-the-fly adaptability and careful reward shaping in a truly distributed architecture. Its distinctive contribution is demonstrating that multi-agent coordination can be achieved and sustained despite communication latency and high vehicle densities. The proposed approach enables real-time collaboration among autonomous cars, a critical step toward safer, greener, and more efficient urban travel.
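The core idea of the abstract, each vehicle acting on purely local decisions plus selectively shared neighbor state, with no central controller, can be illustrated with a minimal ring-road sketch. This is a hypothetical toy model written for this summary, not the authors' deep-RL implementation: every vehicle sees only the position of the car ahead (the "selectively shared" information) and caps its own speed by that gap, which is enough to guarantee collision-free flow.

```python
# Hypothetical sketch: decentralized coordination on a ring road.
# Each agent's rule is local: accelerate toward v_max, but never drive
# farther than (gap to the car ahead) - 1 cells in one step. Because the
# car ahead moves a non-negative distance in the same step, the gap can
# never close to zero, so no collisions occur without any central control.

ROAD_LEN = 50  # cells on the circular road (illustrative value)

def step(positions, speeds, v_max=5):
    """One synchronous update of all agents using only local information."""
    order = sorted(range(len(positions)), key=lambda i: positions[i])
    new_pos = list(positions)
    for idx, i in enumerate(order):
        ahead = order[(idx + 1) % len(order)]
        # Selectively shared info: the neighbor's current position only.
        gap = (positions[ahead] - positions[i]) % ROAD_LEN
        v = max(0, min(speeds[i] + 1, v_max, gap - 1))
        speeds[i] = v
        new_pos[i] = (positions[i] + v) % ROAD_LEN
    return new_pos, speeds

# 10 evenly spaced vehicles, all starting at rest.
positions = list(range(0, ROAD_LEN, ROAD_LEN // 10))
speeds = [0] * 10
for _ in range(100):
    positions, speeds = step(positions, speeds)

print(speeds)  # all agents settle at the gap-limited speed of 4
```

In the paper's setting, the hand-coded speed rule would be replaced by a learned policy and the shaped reward would trade off speed against safety margins; the sketch only shows why local decisions plus minimal information sharing can suffice for coordination.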