# MPC-RL

A framework for integrating Model Predictive Control (MPC) and Single-agent Reinforcement Learning (RL) for autonomous vehicle control in complex unsignalized intersection driving environments. The primary focus is on optimizing vehicle trajectories and control strategies using MPC, with future extensions for RL enhancements.
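To give a feel for the receding-horizon idea behind MPC, here is a deliberately tiny, self-contained sketch (not the project's controller): a 1-D speed-tracking "MPC" that scores a discrete set of constant accelerations over a short horizon and applies only the first action before re-planning. All function names, weights, and numbers below are illustrative assumptions, not taken from this repository.

```python
import numpy as np

def mpc_speed_control(v0, v_ref, horizon=10, dt=0.1,
                      a_candidates=np.linspace(-3.0, 3.0, 25)):
    """Toy MPC step: roll out each candidate (constant) acceleration
    over the horizon and return the one with the lowest cost, where
    cost = squared speed-tracking error + a small control penalty."""
    best_a, best_cost = 0.0, np.inf
    for a in a_candidates:
        v, cost = v0, 0.0
        for _ in range(horizon):
            v = v + a * dt
            cost += (v - v_ref) ** 2 + 0.01 * a ** 2
        if cost < best_cost:
            best_a, best_cost = float(a), cost
    return best_a

# Receding horizon: apply only the first action, then re-plan each step.
v = 0.0
for _ in range(50):
    v += mpc_speed_control(v, v_ref=10.0) * 0.1
```

A real MPC solves a constrained optimization over the full control sequence (e.g. with CasADi or a QP solver) instead of enumerating constant actions, but the plan-apply-replan loop is the same.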

## Installation

Clone the repository:

```shell
git clone https://github.com/SaeedRahmani/MPC-RL_for_AVs.git
cd MPC-RL_for_AVs
```

Install in development (editable) mode:

```shell
pip install -e .
# or install the dependencies only:
# pip install -r requirements.txt
```

## Structure

```text
│   .gitattributes
│   .gitignore
│   README.md
│   requirements.txt
│   setup.py
│
├───agents
│   │   a2c_mpc.py
│   │   base.py
│   │   ppo_mpc.py
│   │   pure_mpc.py
│   │   utils.py
│   │   __init__.py
│
├───config
│   │   cfg.yaml
│   │   config.py
│   │   __init__.py
│
├───main
│   │   run_pure_mpc.py
│   │   train_a2c_mpc.py
│   │
│   └───test_functionality
│           test_sb3.py
│           test_traj.py
│
└───trainers
    │   trainer.py
    │   utils.py
    │   __init__.py
```

## To-do List

- Speed: Improve training speed.
- Algorithm: Add other RL algorithms.
- Animation: Enhance the visualization by adding vehicle shapes and orientations.

## Completed

- Config: Add a full configuration file, output structure, and logging.
- MPC: Generate a new reference speed when a collision is detected.
- MPC: Adapt the collision cost and vehicle-dynamics updates for the other agent vehicles.
- MPC+RL: Integrate PureMPC_Agent as a component into stable_baselines3 agents.
- RL: Add stable-baselines3 as a dependency.
- Config: Add Hydra and YAML configuration files for better organization of training and testing parameters.
- MPC: Fix the issue where the MPC agent did not follow the reference trajectory correctly.