A framework for integrating Model Predictive Control (MPC) and single-agent Reinforcement Learning (RL) for autonomous vehicle control in complex, unsignalized-intersection driving environments. The primary focus is on optimizing vehicle trajectories and control strategies with MPC, with future extensions toward RL enhancements.
Clone the repository:

```bash
git clone https://github.com/SaeedRahmani/MPC-RL_for_AVs.git
cd MPC-RL_for_AVs
```

Install in development (editable) mode:

```bash
pip install -e .
# Alternatively, install only the dependencies:
# pip install -r requirements.txt
```
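A quick way to confirm the editable install worked, assuming `setup.py` installs the `agents` package shown in the tree below:

```python
# After `pip install -e .`, the project's packages should be importable
# from anywhere, resolving back into the cloned repository.
import agents
print(agents.__file__)
```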
Repository structure (`tree` output):

```
│   .gitattributes
│   .gitignore
│   README.md
│   requirements.txt
│   setup.py
│
├───agents
│   │   a2c_mpc.py
│   │   base.py
│   │   ppo_mpc.py
│   │   pure_mpc.py
│   │   utils.py
│   │   __init__.py
│
├───config
│   │   cfg.yaml
│   │   config.py
│   │   __init__.py
│
├───main
│   │   run_pure_mpc.py
│   │   train_a2c_mpc.py
│   │
│   └───test_functionality
│           test_sb3.py
│           test_traj.py
│
└───trainers
│       trainer.py
│       utils.py
│       __init__.py
```
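The scripts under `main/` are the entry points (e.g. `python main/run_pure_mpc.py`). As a rough sketch of what running the pure MPC agent might look like, assuming a `highway-env` unsignalized intersection and a conventional agent interface; the environment id, constructor signature, and `predict` method are assumptions, so check `agents/pure_mpc.py` and `main/run_pure_mpc.py` for the actual API:

```python
import gymnasium as gym
import highway_env  # noqa: F401 -- registers highway-env environments, e.g. "intersection-v0"

from agents.pure_mpc import PureMPC_Agent  # class name per agents/pure_mpc.py (interface assumed)

# Assumption: the project evaluates on highway-env's unsignalized intersection.
env = gym.make("intersection-v0", render_mode="human")
obs, info = env.reset()

agent = PureMPC_Agent(env)  # constructor signature is illustrative

done = False
while not done:
    action = agent.predict(obs)  # assumed interface: MPC returns the next control input
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated

env.close()
```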
- **Speed**: Improve the speed of training.
- **Algorithm**: Add other RL algorithms.
- **Animation**: Enhance the visualization by adding vehicle shapes and orientations.
- **Config**: Create a full configuration file, an output structure, and logging.
- **MPC**: Generate a new reference speed if a collision is detected.
- **MPC**: Adapt the collision cost and vehicle dynamics updates for other agent vehicles.
- **MPC+RL**: Integrate `PureMPC_Agent` as a component into `stable_baselines3` agents (see the sketch after this list).
- **RL**: Add `stable_baselines3` as a dependency.
- **Config**: Add `hydra` and YAML configuration files for better organization of training and testing parameters.
- **MPC**: Fixed the issue where the MPC agent did not follow the reference trajectory correctly.
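For the MPC+RL item, one plausible integration pattern is to wrap the MPC solver in a `gymnasium` wrapper so that a `stable_baselines3` learner trains on top of it: the RL action parameterizes the MPC problem, and the MPC output is what the simulator executes. The wrapper below is a minimal sketch under that assumption; only the `gymnasium` and `stable_baselines3` calls are real APIs, the class names and `predict` signature are illustrative, and `agents/a2c_mpc.py` holds the project's actual approach:

```python
import gymnasium as gym
from stable_baselines3 import A2C


class MPCActionWrapper(gym.Wrapper):
    """Illustrative wrapper: the RL action tunes the MPC reference, and the
    MPC solution is the low-level control actually sent to the simulator."""

    def __init__(self, env: gym.Env, mpc_agent):
        super().__init__(env)
        self.mpc_agent = mpc_agent  # e.g. a PureMPC_Agent instance (assumed interface)

    def step(self, rl_action):
        # Assumed interface: the MPC agent turns the RL proposal into a control input.
        control = self.mpc_agent.predict(rl_action)
        return self.env.step(control)


# Usage sketch (environment id and agent constructor are illustrative):
# env = MPCActionWrapper(gym.make("intersection-v0"), PureMPC_Agent(...))
# model = A2C("MlpPolicy", env, verbose=1)
# model.learn(total_timesteps=100_000)
```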