[Feature] Terminated/truncated support and Gymnasium wrapper #143

Merged · 19 commits · Sep 20, 2024
2 changes: 1 addition & 1 deletion .github/unittest/install_dependencies.sh
@@ -7,7 +7,7 @@

python -m pip install --upgrade pip

pip install -e .
pip install -e ".[gymnasium]"

python -m pip install flake8 pytest pytest-cov tqdm matplotlib==3.8
python -m pip install cvxpylayers # Navigation heuristic
46 changes: 31 additions & 15 deletions README.md
@@ -28,7 +28,7 @@ Scenario creation is made simple and modular to incentivize contributions.
VMAS simulates agents and landmarks of different shapes and supports rotations, elastic collisions, joints, and custom gravity.
Holonomic motion models are used for the agents to simplify simulation. Custom sensors such as LIDARs are available and the simulator supports inter-agent communication.
Vectorization in [PyTorch](https://pytorch.org/) allows VMAS to perform simulations in a batch, seamlessly scaling to tens of thousands of parallel environments on accelerated hardware.
VMAS has an interface compatible with [OpenAI Gym](https://github.com/openai/gym), with [RLlib](https://docs.ray.io/en/latest/rllib/index.html), with [torchrl](https://github.com/pytorch/rl) and its MARL training library: [BenchMARL](https://github.com/facebookresearch/BenchMARL),
VMAS has an interface compatible with [OpenAI Gym](https://github.com/openai/gym), with [Gymnasium](https://gymnasium.farama.org/), with [RLlib](https://docs.ray.io/en/latest/rllib/index.html), with [torchrl](https://github.com/pytorch/rl) and its MARL training library: [BenchMARL](https://github.com/facebookresearch/BenchMARL),
enabling out-of-the-box integration with a wide range of RL algorithms.
The implementation is inspired by [OpenAI's MPE](https://github.com/openai/multiagent-particle-envs).
Alongside VMAS's scenarios, we port and vectorize all the scenarios in MPE.
@@ -113,28 +113,37 @@ git clone https://github.com/proroklab/VectorizedMultiAgentSimulator.git
cd VectorizedMultiAgentSimulator
pip install -e .
```
By default, vmas has only the core requirements. Here are some optional packages you may want to install:
By default, vmas has only the core requirements. To install further dependencies for the [Gymnasium](https://gymnasium.farama.org/) and [RLlib](https://docs.ray.io/en/latest/rllib/index.html) wrappers, for rendering, and for testing, you can install the following optional extras:
```bash
# Training
pip install "ray[rllib]"==2.1.0 # We support versions "ray[rllib]<=2.2,>=1.13"
pip install torchrl
# install gymnasium for gymnasium wrappers
pip install vmas[gymnasium]

# Logging
pip install wandb
# install rllib for rllib wrapper
pip install vmas[rllib]

# Rendering
pip install opencv-python moviepy matplotlib
# install rendering dependencies
pip install vmas[render]

# Tests
pip install pytest pyyaml pytest-instafail tqdm
# install testing dependencies
pip install vmas[test]

# install all dependencies
pip install vmas[all]
```

You can also install the following training libraries:

```bash
pip install benchmarl # For training in BenchMARL
pip install torchrl # For training in TorchRL
pip install "ray[rllib]"==2.1.0 # For training in RLlib. We support versions "ray[rllib]<=2.2,>=1.13"
```

### Run

To use the simulator, simply create an environment by passing the name of the scenario
you want (from the `scenarios` folder) to the `make_env` function.
The function arguments are explained in the documentation. The function returns an environment
object with the OpenAI gym interface:
The function arguments are explained in the documentation. The function returns an environment object with the VMAS interface:

Here is an example:
```python
@@ -143,17 +152,24 @@
    num_envs=32,
    device="cpu", # Or "cuda" for GPU
    continuous_actions=True,
    wrapper=None, # One of: None, vmas.Wrapper.RLLIB, and vmas.Wrapper.GYM
    wrapper=None, # One of: None, "rllib", "gym", "gymnasium", "gymnasium_vec"
    max_steps=None, # Defines the horizon. None is infinite horizon.
    seed=None, # Seed of the environment
    dict_spaces=False, # By default tuple spaces are used with each element in the tuple being an agent.
    # If dict_spaces=True, the spaces will become Dict with each key being the agent's name
    grad_enabled=False, # If grad_enabled the simulator is differentiable and gradients can flow from output to input
    terminated_truncated=False, # If terminated_truncated the simulator will return separate `terminated` and `truncated` flags in the `done()`, `step()`, and `get_from_scenario()` functions instead of a single `done` flag
    **kwargs # Additional arguments you want to pass to the scenario initialization
)
```
A further example that you can run is contained in `use_vmas_env.py` in the `examples` directory.
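
Here is a quick sketch of using the new Gymnasium wrapper. This is a hedged example: the `"gymnasium"` wrapper string comes from the `make_env` arguments above, while the scenario name, the single-environment assumption for the non-vectorized wrapper, and the standard Gymnasium reset/step signatures are illustrative assumptions rather than verbatim API documentation.
```python
import vmas

env = vmas.make_env(
    scenario="waterfall",  # example scenario name
    num_envs=1,  # assumption: the non-vectorized "gymnasium" wrapper handles a single environment
                 # ("gymnasium_vec" is the vectorized counterpart)
    device="cpu",
    wrapper="gymnasium",
    max_steps=100,
)
# Standard Gymnasium API: reset returns (obs, info), step returns a 5-tuple
obs, info = env.reset()
actions = env.action_space.sample()  # one random action per agent
obs, reward, terminated, truncated, info = env.step(actions)
```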

With the `terminated_truncated` flag set to `True`, the simulator will return separate `terminated` and `truncated` flags
in the `done()`, `step()`, and `get_from_scenario()` functions instead of a single `done` flag.
This is useful when you want to know whether the environment is done because the episode has terminated or
because the maximum episode length / timestep horizon has been reached (truncation).
See [the Gymnasium documentation](https://gymnasium.farama.org/tutorials/gymnasium_basics/handling_time_limits/) for more details on this.
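
As a hedged illustration of the flag on an unwrapped environment (the scenario name and the exact ordering of the returned values are assumptions based on the description above, not verbatim API documentation):
```python
import vmas

env = vmas.make_env(
    scenario="balance",  # example scenario name
    num_envs=32,
    device="cpu",
    max_steps=100,  # truncation horizon
    terminated_truncated=True,
)
obs = env.reset()
actions = [env.get_random_action(agent) for agent in env.agents]
# With the flag enabled, step() returns separate terminated and truncated
# flags (one entry per vectorized environment) instead of a single done flag.
obs, rews, terminated, truncated, info = env.step(actions)
```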

#### RLlib

To see how to use VMAS in RLlib, check out the script in `examples/rllib.py`.
@@ -235,7 +251,7 @@ Each format will work regardless of the fact that tuples or dictionary spaces ha
- **Simple**: Complex vectorized physics engines exist (e.g., [Brax](https://github.com/google/brax)), but they do not scale efficiently when dealing with multiple agents. This defeats the computational speed goal set by vectorization. VMAS uses a simple custom 2D dynamics engine written in PyTorch to provide fast simulation.
- **General**: The core of VMAS is structured so that it can be used to implement general high-level multi-robot problems in 2D. It can support adversarial as well as cooperative scenarios. Holonomic point-robot simulation has been chosen to focus on general high-level problems, without learning low-level custom robot controls through MARL.
- **Extensible**: VMAS is not just a simulator with a set of environments. It is a framework that can be used to create new multi-agent scenarios in a format that is usable by the whole MARL community. For this purpose, we have modularized the process of creating a task and introduced interactive rendering to debug it. You can define your own scenario in minutes. Have a look at the dedicated section in this document.
- **Compatible**: VMAS has wrappers for [RLlib](https://docs.ray.io/en/latest/rllib/index.html), [torchrl](https://pytorch.org/rl/reference/generated/torchrl.envs.libs.vmas.VmasEnv.html), and [OpenAI Gym](https://github.com/openai/gym). RLlib and torchrl have a large number of already implemented RL algorithms.
- **Compatible**: VMAS has wrappers for [RLlib](https://docs.ray.io/en/latest/rllib/index.html), [torchrl](https://pytorch.org/rl/reference/generated/torchrl.envs.libs.vmas.VmasEnv.html), [OpenAI Gym](https://github.com/openai/gym) and [Gymnasium](https://gymnasium.farama.org/). RLlib and torchrl have a large number of already implemented RL algorithms.
Keep in mind that this interface is less efficient than the unwrapped version. For an example of wrapping, see the main of `make_env`.
- **Tested**: Our scenarios come with tests which run a custom designed heuristic on each scenario.
- **Entity shapes**: Our entities (agent and landmarks) can have different customizable shapes (spheres, boxes, lines).
2 changes: 1 addition & 1 deletion docs/source/conf.py
@@ -39,7 +39,7 @@
intersphinx_mapping = {
    "python": ("https://docs.python.org/3/", None),
    "sphinx": ("https://www.sphinx-doc.org/en/master/", None),
    "torch": ("https://pytorch.org/docs/master", None),
    "torch": ("https://pytorch.org/docs/stable/", None),
    "torchrl": ("https://pytorch.org/rl/stable/", None),
    "tensordict": ("https://pytorch.org/tensordict/stable", None),
    "benchmarl": ("https://benchmarl.readthedocs.io/en/latest/", None),
27 changes: 22 additions & 5 deletions docs/source/usage/installation.rst
@@ -29,6 +29,21 @@ Install optional requirements
By default, vmas has only the core requirements.
Here are some optional packages you may want to install.

Wrappers
^^^^^^^^

If you want to use VMAS environment wrappers, you may want to install VMAS
with the following options:

.. code-block:: console

   # install gymnasium for gymnasium wrapper
   pip install vmas[gymnasium]

   # install rllib for rllib wrapper
   pip install vmas[rllib]


Training
^^^^^^^^

@@ -40,12 +55,14 @@ You may want to install one of the following training libraries
   pip install torchrl
   pip install "ray[rllib]"==2.1.0 # We support versions "ray[rllib]<=2.2,>=1.13"

Logging
^^^^^^^
Utils
^^^^^

You may want to install the following rendering and logging tools
You may want to install the following additional tools

.. code-block:: console

   pip install wandb
   pip install opencv-python moviepy matplotlib
   # install rendering dependencies
   pip install vmas[render]
   # install testing dependencies
   pip install vmas[test]
6 changes: 6 additions & 0 deletions setup.py
@@ -30,5 +30,11 @@ def get_version():
    author_email="[email protected]",
    packages=find_packages(),
    install_requires=["numpy", "torch", "pyglet<=1.5.27", "gym", "six"],
    extras_require={
        "gymnasium": ["gymnasium", "shimmy"],
        "rllib": ["ray[rllib]<=2.2"],
        "render": ["opencv-python", "moviepy", "matplotlib"],
        "test": ["pytest", "pytest-instafail", "pyyaml", "tqdm"],
    },
    include_package_data=True,
)
11 changes: 3 additions & 8 deletions tests/test_vmas.py
@@ -2,7 +2,6 @@
# ProrokLab (https://www.proroklab.org/)
# All rights reserved.
import math
import os
import random
import sys
from pathlib import Path
@@ -18,13 +17,9 @@
def scenario_names():
    scenarios = []
    scenarios_folder = Path(__file__).parent.parent / "vmas" / "scenarios"
    for _, _, filenames in os.walk(scenarios_folder):
        scenarios += filenames
    scenarios = [
        scenario.split(".")[0]
        for scenario in scenarios
        if scenario.endswith(".py") and not scenario.startswith("__")
    ]
    for path in scenarios_folder.glob("**/*.py"):
        if path.is_file() and not path.name.startswith("__"):
            scenarios.append(path.stem)
    return scenarios


3 changes: 3 additions & 0 deletions tests/test_wrappers/__init__.py
@@ -0,0 +1,3 @@
# Copyright (c) 2024.
# ProrokLab (https://www.proroklab.org/)
# All rights reserved.
141 changes: 141 additions & 0 deletions tests/test_wrappers/test_gym_wrapper.py
@@ -0,0 +1,141 @@
# Copyright (c) 2024.
# ProrokLab (https://www.proroklab.org/)
# All rights reserved.

import gym
import numpy as np
import pytest
from torch import Tensor

from vmas import make_env
from vmas.simulator.environment import Environment


TEST_SCENARIOS = [
    "balance",
    "discovery",
    "give_way",
    "joint_passage",
    "navigation",
    "passage",
    "transport",
    "waterfall",
    "simple_world_comm",
]


def _check_obs_type(obss, obs_shapes, dict_space, return_numpy):
    if dict_space:
        assert isinstance(
            obss, dict
        ), f"Expected dictionary of observations, got {type(obss)}"
        for k, obs in obss.items():
            obs_shape = obs_shapes[k]
            assert (
                obs.shape == obs_shape
            ), f"Expected shape {obs_shape}, got {obs.shape}"
            if return_numpy:
                assert isinstance(
                    obs, np.ndarray
                ), f"Expected numpy array, got {type(obs)}"
            else:
                assert isinstance(
                    obs, Tensor
                ), f"Expected torch tensor, got {type(obs)}"
    else:
        assert isinstance(
            obss, list
        ), f"Expected list of observations, got {type(obss)}"
        for obs, shape in zip(obss, obs_shapes):
            assert obs.shape == shape, f"Expected shape {shape}, got {obs.shape}"
            if return_numpy:
                assert isinstance(
                    obs, np.ndarray
                ), f"Expected numpy array, got {type(obs)}"
            else:
                assert isinstance(
                    obs, Tensor
                ), f"Expected torch tensor, got {type(obs)}"


@pytest.mark.parametrize("scenario", TEST_SCENARIOS)
@pytest.mark.parametrize("return_numpy", [True, False])
@pytest.mark.parametrize("continuous_actions", [True, False])
@pytest.mark.parametrize("dict_space", [True, False])
def test_gym_wrapper(
    scenario, return_numpy, continuous_actions, dict_space, max_steps=10
):
    env = make_env(
        scenario=scenario,
        num_envs=1,
        device="cpu",
        continuous_actions=continuous_actions,
        dict_spaces=dict_space,
        wrapper="gym",
        wrapper_kwargs={"return_numpy": return_numpy},
        max_steps=max_steps,
    )

    assert (
        len(env.observation_space) == env.unwrapped.n_agents
    ), "Expected one observation per agent"
    assert (
        len(env.action_space) == env.unwrapped.n_agents
    ), "Expected one action per agent"
    if dict_space:
        assert isinstance(
            env.observation_space, gym.spaces.Dict
        ), "Expected Dict observation space"
        assert isinstance(
            env.action_space, gym.spaces.Dict
        ), "Expected Dict action space"
        obs_shapes = {
            k: obs_space.shape for k, obs_space in env.observation_space.spaces.items()
        }
    else:
        assert isinstance(
            env.observation_space, gym.spaces.Tuple
        ), "Expected Tuple observation space"
        assert isinstance(
            env.action_space, gym.spaces.Tuple
        ), "Expected Tuple action space"
        obs_shapes = [obs_space.shape for obs_space in env.observation_space.spaces]

    assert isinstance(
        env.unwrapped, Environment
    ), "The unwrapped attribute of the Gym wrapper should be a VMAS Environment"

    obss = env.reset()
    _check_obs_type(obss, obs_shapes, dict_space, return_numpy=return_numpy)

    for _ in range(max_steps):
        actions = [
            env.unwrapped.get_random_action(agent).numpy()
            for agent in env.unwrapped.agents
        ]
        obss, rews, done, info = env.step(actions)
        _check_obs_type(obss, obs_shapes, dict_space, return_numpy=return_numpy)

        assert len(rews) == env.unwrapped.n_agents, "Expected one reward per agent"
        if not dict_space:
            assert isinstance(
                rews, list
            ), f"Expected list of rewards but got {type(rews)}"

            rew_values = rews
        else:
            assert isinstance(
                rews, dict
            ), f"Expected dictionary of rewards but got {type(rews)}"
            rew_values = list(rews.values())
        assert all(
            isinstance(rew, float) for rew in rew_values
        ), f"Expected float rewards but got {type(rew_values[0])}"

        assert isinstance(done, bool), f"Expected bool for done but got {type(done)}"

        assert isinstance(
            info, dict
        ), f"Expected info to be a dictionary but got {type(info)}"

    assert done, "Expected done to be True after max_steps steps"