MobilityGen is a toolset built on NVIDIA Isaac Sim that enables you to easily collect data for mobile robots. Read below to learn more.
It supports:

- Rich ground truth data
  - Occupancy Map
  - Pose
  - Joint Positions / Velocities
  - RGB Images
  - Segmentation Images
  - Depth Images
  - If you're interested in more, let us know!
- Many robot types
  - Differential drive - Jetbot, Carter
  - Quadruped - Spot
  - Humanoid - H1
  - Implement your own by subclassing the Robot class
- Many data collection methods
  - Manual - Keyboard Teleoperation, Gamepad Teleoperation
  - Automated - Random Accelerations, Random Path Following
  - Implement your own by subclassing the Scenario class
This enables you to train models and test algorithms related to robot mobility.
To get started with MobilityGen, follow the setup and usage instructions below!
Follow these steps to set up MobilityGen.

- Using the Omniverse Launcher, install Isaac Sim 4.2.0.
- Clone the repository: git clone <repo_url> MobilityGen
  Remember the path you cloned to for the next step. Next, we'll call link_app.sh to link the Isaac Sim installation directory to the local app folder.
- Navigate to the repo root: cd MobilityGen
- Run the following to link the app folder: ./link_app.sh
- Navigate to the path planner directory: cd MobilityGen/path_planner
- Install with pip using the Isaac Sim Python interpreter: ../app/python.sh -m pip install -e .
- Navigate to the repo root: cd MobilityGen
- Launch Isaac Sim with the required extensions enabled by calling ./scripts/launch_sim.sh

That's it! If everything worked, you should see Isaac Sim open and a window titled MobilityGen appear.
Read Usage below to learn how to generate data with MobilityGen.
The following details a typical workflow for collecting data with MobilityGen.
- Navigate to the repo root: cd MobilityGen
- Launch Isaac Sim with the required extensions enabled by calling ./scripts/launch_sim.sh
  This assumes you see the MobilityGen extension window.
- Under Scene USD URL / Path, copy and paste the following:
  http://omniverse-content-production.s3-us-west-2.amazonaws.com/Assets/Isaac/4.2/Isaac/Environments/Simple_Warehouse/warehouse_multiple_shelves.usd
- Under the Scenario dropdown, select KeyboardTeleoperationScenario to start.
- Under the Robot dropdown, select H1Robot.
- Click Build. After a few seconds, you should see the scene and occupancy map appear.
- Click Reset to randomly initialize the scenario. Do this until the robot spawns inside the warehouse.

Before you start recording, try moving the robot around to get a feel for it.
To move the robot, use the following keys:

- W - Move Forward
- A - Turn Left
- S - Move Backwards
- D - Turn Right
Once you're comfortable, you can record a log.
- Click Start Recording to start recording a log. You should now see a recording name and the recording duration change.
- Move the robot around.
- Click Stop Recording to stop recording.

The data is recorded to ~/MobilityGenData/recordings by default.
If you've gotten this far, you've recorded a trajectory, but it doesn't include the rendered sensor data.
Rendering the sensor data is done offline. To do this, call the following:

- Close Isaac Sim if it's running.
- Navigate to the repo root: cd MobilityGen
- Run the scripts/replay_directory.py script to replay and render all recordings in the directory: python scripts/replay_directory.py --render_interval=200

Note: For speed in this tutorial, we use a render interval of 200. If the physics runs at 200 steps per second, this means we render one image per second.
That's it! Now the data with renderings should be stored in ~/MobilityGenData/replays.
We provide a few examples in the examples folder for working with the data.
One example uses Gradio to explore all of the recordings in the replays directory. To run this example, call the following:

- Call the Gradio data visualization example script: python examples/04_visualize_gradio.py
- Open your web browser to http://127.0.0.1:7860 to explore the data.
If everything worked, you should be able to view the data in the browser.
That's it! Once you've gotten the hang of how to record data, you might try:

- Recording data using one of the procedural methods (like RandomAccelerationScenario or RandomPathFollowingScenario). These methods don't rely on human input, and automatically "restart" when finished to create new recordings.
- Implementing or customizing your own Robot class.
- Implementing or customizing your own Scenario class.

If you find MobilityGen helpful for your use case, run into issues, or have any questions, please let us know!
This is the same as in the basic usage.
- Navigate to the repo root: cd MobilityGen
- Launch Isaac Sim with the required extensions enabled by calling ./scripts/launch_sim.sh
- Under Scene USD URL / Path, copy and paste the following:
  http://omniverse-content-production.s3-us-west-2.amazonaws.com/Assets/Isaac/4.2/Isaac/Environments/Simple_Warehouse/warehouse_multiple_shelves.usd
- Under the Scenario dropdown, select RandomPathFollowingScenario or RandomAccelerationScenario.
- Under the Robot dropdown, select H1Robot.
- Click Build. After a few seconds, you should see the scene and occupancy map appear.
- Click Start Recording to start recording data.
- Go grab some coffee! The procedural methods automatically determine when to reset (i.e., if the robot collides with an object and needs to respawn). If you run into any issues with the procedural methods getting stuck, please let us know.
- Click Stop Recording to stop recording data.

The data is recorded to ~/MobilityGenData/recordings by default.
This is the same as before. Please refer to Steps 6-7 of the "Basic Usage" guide.
You can implement a new robot for use with MobilityGen.
The general workflow is as follows:
- Subclass the Robot class.
- Implement the build() method. This method is responsible for adding the robot to the USD stage.
- Implement the write_action() method. This method performs the logic of applying the linear, angular velocity command.
- Override the common class parameters (like physics_dt, occupancy_map_z_min, etc.).
- Register the robot class by using the ROBOT.register() decorator. This makes the custom robot discoverable.
We recommend referencing the example robots in robots.py for more details.
A good way to start could be simply by modifying an existing robot. For example, you might change the position at which the camera is mounted on the H1 robot.
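For reference, below is a minimal sketch of what a custom robot could look like. It assumes the Robot base class and the ROBOT registry can be imported from robots.py as described above; the class name, method signatures, and parameter values are illustrative assumptions, not the exact MobilityGen API.

# Hypothetical sketch - names, signatures, and values are assumptions
from robots import Robot, ROBOT


@ROBOT.register()
class MyCustomRobot(Robot):

    # Common class parameters to override (placeholder values)
    physics_dt = 0.005          # physics step size in seconds
    occupancy_map_z_min = 0.1   # z-range used when computing the occupancy map
    occupancy_map_z_max = 0.6

    def build(self, prim_path):
        # Add the robot asset to the USD stage at prim_path, e.g. by
        # referencing a robot USD file and attaching the front camera.
        ...

    def write_action(self, linear_velocity, angular_velocity):
        # Apply the (linear, angular) velocity command, e.g. by converting it
        # into joint velocity targets for the robot's articulation.
        ...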
You can implement a new data recording scenario for use with MobilityGen.
The general workflow is as follows:
- Subclass the Scenario class.
- Implement the reset() method. This method is responsible for randomizing / initializing the scenario (i.e., spawning the robot).
- Implement the step() method. This method is responsible for incrementing the scenario by one physics step.
- Register the scenario class by using the SCENARIOS.register() decorator. This makes the custom scenario discoverable.
We recommend referencing the example scenarios in scenarios.py for more details.
A good way to start could be simply by modifying an existing scenario. For example, you might implement a new method for generating random motions.
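Similarly, below is a minimal sketch of a custom scenario. It assumes the Scenario base class and the SCENARIOS registry can be imported from scenarios.py as described above; the signatures and the way the action is passed to the robot are illustrative assumptions.

# Hypothetical sketch - signatures and helpers are assumptions
import numpy as np

from scenarios import Scenario, SCENARIOS


@SCENARIOS.register()
class RandomTurnScenario(Scenario):

    def reset(self):
        # Randomize / initialize the scenario, e.g. sample a collision-free
        # spawn pose for the robot from the occupancy map.
        ...

    def step(self, step_size):
        # Increment the scenario by one physics step, e.g. drive forward at a
        # fixed speed while randomly varying the angular velocity.
        linear = 0.5
        angular = float(np.random.uniform(-1.0, 1.0))
        # ... write (linear, angular) to the robot action here
        ...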
MobilityGen records two types of data.
- Static Data is recorded at the beginning of a recording:
  - Occupancy map
  - Configuration info
    - Robot type
    - Scenario type
    - Scene USD URL
  - USD Stage
- State Data is recorded at each physics timestep:
  - Robot action: Linear, angular velocity
  - Robot pose: Position, quaternion
  - Robot joint positions / velocities
  - Robot sensor data:
    - Depth image
    - RGB image
    - Segmentation image / info
This data can easily be read using the Reader class.
from reader import Reader
reader = Reader(recording_path="replays/2025-01-17T16:44:33.006521")
print(len(reader)) # print number of timesteps
state_dict = reader.read_state_dict(0) # read timestep 0
The state_dict has the following schema:

{
    "robot.action": np.ndarray,  # [2] - Linear, angular command velocity
    "robot.position": np.ndarray,  # [3] - XYZ
    "robot.orientation": np.ndarray,  # [4] - Quaternion
    "robot.joint_positions": np.ndarray,  # [J] - Joint positions
    "robot.joint_velocities": np.ndarray,  # [J] - Joint velocities
    "robot.front_camera.left.rgb_image": np.ndarray,  # [HxWx3], np.uint8 - RGB image
    "robot.front_camera.left.depth_image": np.ndarray,  # [HxW], np.float32 - Depth in meters
    "robot.front_camera.left.segmentation_image": np.ndarray,  # [HxW], np.uint8 - Segmentation class index
    "robot.front_camera.left.segmentation_info": dict,  # See the Isaac replicator segmentation info format
    ...
}
The Reader class abstracts away the details of reading the state dictionary from the recording.
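For example, here is a short, hypothetical sketch that collects every RGB frame and its paired action command from a single recording (e.g., as the starting point for an imitation-learning dataset). It assumes the state_dict keys shown above; adjust the camera key and recording path to match your data.

import numpy as np

from reader import Reader

reader = Reader(recording_path="replays/2025-01-17T16:44:33.006521")  # illustrative path

images, actions = [], []
for t in range(len(reader)):
    state = reader.read_state_dict(t)
    images.append(state["robot.front_camera.left.rgb_image"])  # [H, W, 3], np.uint8
    actions.append(state["robot.action"])  # [2] - linear, angular velocity

actions = np.stack(actions)
print(f"{len(images)} frames, actions shape {actions.shape}")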
In case you're interested, each recording is represented as a directory with the following structure:

2025-01-17T16:44:33.006521/
    occupancy_map/
        map.png
        map.yaml
    config.json
    stage.usd
    state/
        common/
            00000000.npy
            00000001.npy
            ...
        depth/
            robot.front_camera.left.depth_image/
                00000000.png
                00000001.png
                ...
            robot.front_camera.right.depth_image/
                ...
        rgb/
            robot.front_camera.left.rgb_image/
                00000000.jpg
                00000001.jpg
                ...
            robot.front_camera.right.rgb_image/
                ...
        segmentation/
            robot.front_camera.left.segmentation_image/
                00000000.png
                00000001.png
                ...
            robot.front_camera.right.segmentation_image/
                ...
Most of the state information is captured under the state/common folder, as a dictionary stored in one .npy file per timestep.
However, for some data (images) this is inefficient. These instead get captured in their own folder based on the data type and name (e.g., rgb/robot.front_camera.left.rgb_image).
The name of each file corresponds to its physics timestep.
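If you prefer to bypass the Reader class, the sketch below shows one possible way to read a single timestep directly from this layout using NumPy and Pillow. The file names are illustrative, and loading the dictionary with allow_pickle=True / .item() is an assumption based on the structure described above.

import numpy as np
from PIL import Image

recording = "replays/2025-01-17T16:44:33.006521"  # illustrative path

# Non-image state for physics timestep 0, stored as a dictionary in a .npy file
common = np.load(f"{recording}/state/common/00000000.npy", allow_pickle=True).item()
print(common["robot.position"], common["robot.action"])

# The RGB frame captured at the same physics timestep
rgb = np.asarray(Image.open(
    f"{recording}/state/rgb/robot.front_camera.left.rgb_image/00000000.jpg"))
print(rgb.shape)  # (H, W, 3), np.uint8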
If you have any questions regarding the data logged by MobilityGen, please let us know!
This Developer Certificate of Origin applies to this project.
John Welsh, Huihua Zhao, Vikram Ramasamy, Wei Liu, Joydeep Biswas, Soha Pouya, Yan Chang