Overview
This project aimed to enhance road safety and traffic efficiency by combining Vehicle-to-Vehicle (V2V) communication with Reinforcement Learning (RL). Using the SUMO traffic simulator and its TraCI interface, virtual agents trained with Deep Q-Learning (DQN) exchanged critical data in real time, enabling proactive decision-making in dynamic urban environments.
Technical Highlights
- V2V Simulation Environment: Utilized SUMO and TraCI to simulate realistic urban traffic scenarios, providing dynamic feedback to the RL agents.
- Deep Q-Network (DQN): Implemented a 3-layer neural network whose state inputs include position, speed, acceleration, and lane information for the ego vehicle and its surrounding vehicles.
- Dynamic Scenario Testing: Incorporated randomly placed stopped vehicles, testing agent adaptability and proactive collision avoidance.
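The network described above can be sketched as a small PyTorch model. The state layout (ego vehicle plus four neighbors, each with position, speed, acceleration, and lane index), the layer widths, and the action set are illustrative assumptions, not the project's exact configuration:

```python
import torch
import torch.nn as nn

# Assumed state layout: ego vehicle + 4 neighbors, each described by
# (x, y, speed, acceleration, lane index) -> 5 vehicles * 5 features.
N_VEHICLES = 5
FEATURES_PER_VEHICLE = 5
STATE_DIM = N_VEHICLES * FEATURES_PER_VEHICLE
N_ACTIONS = 5  # e.g. keep lane, change left, change right, accelerate, brake


class DQN(nn.Module):
    """3-layer fully connected Q-network: state -> one Q-value per action."""

    def __init__(self, state_dim: int = STATE_DIM, n_actions: int = N_ACTIONS):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 128),
            nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


# Greedy action selection for a single (here all-zero) observation.
model = DQN()
obs = torch.zeros(1, STATE_DIM)
action = model(obs).argmax(dim=1).item()
```

In the simulation loop, the observation vector would be assembled each step from TraCI getters (vehicle position, speed, lane index) before being fed to the network.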
Challenges and Solutions
- Complex Scenario Simulation: Created diverse traffic conditions with dynamic vehicle placement to test RL agent robustness.
- Adaptive Learning: Fine-tuned hyperparameters and implemented experience replay for efficient learning in complex environments.
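The experience replay mentioned above can be sketched as a plain ring buffer; the transition fields and the capacity/batch sizes here are illustrative assumptions:

```python
import random
from collections import deque, namedtuple

# One transition from the simulator; field names are illustrative.
Transition = namedtuple("Transition", "state action reward next_state done")


class ReplayBuffer:
    """Fixed-capacity buffer; uniform sampling breaks the temporal
    correlation between consecutive simulation steps."""

    def __init__(self, capacity: int = 10_000):
        self.buffer = deque(maxlen=capacity)  # oldest transitions evicted

    def push(self, *transition) -> None:
        self.buffer.append(Transition(*transition))

    def sample(self, batch_size: int):
        return random.sample(self.buffer, batch_size)

    def __len__(self) -> int:
        return len(self.buffer)


# Typical usage inside the training loop (dummy transitions shown):
buf = ReplayBuffer(capacity=1000)
for step in range(64):
    buf.push([0.0] * 25, 0, -1.0, [0.0] * 25, False)
batch = buf.sample(32)  # minibatch for one DQN gradient update
```

Uniform sampling is the simplest choice; prioritized replay is a common refinement but is not claimed here.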
Technologies and Tools
- Python, PyTorch, SUMO, TraCI
- Deep Q-Network (DQN)
My Role
Led the development and training of the DQN models, optimizing RL strategies to ensure robust vehicle interactions and collision avoidance.
Key Results
The RL-trained agents significantly reduced collision rates and improved traffic flow by effectively navigating dynamic and unpredictable scenarios. The agents demonstrated adaptive behaviors, maintaining safe distances and efficiently executing maneuvers such as lane changes and stops.